Migrating from 1.x to 2.0

The minimum supported version of Go for v2 is 1.18.

To upgrade imports of the Go Driver from v1 to v2, we recommend using marwan-at-work/mod:

mod upgrade --mod-name=go.mongodb.org/mongo-driver

Description Package

The description package has been removed in v2.

Event Package

References to description.Server and description.Topology have been replaced with event.ServerDescription and event.TopologyDescription, respectively. Additionally, the following changes have been made to the fields:

  • Kind has been changed from uint32 to string for ease of use.
  • SessionTimeoutMinutes has been changed from uint32 to *int64 to differentiate between a zero timeout and no timeout.

The following event constants have been renamed to match their string literal value:

  • PoolCreated to ConnectionPoolCreated
  • PoolReady to ConnectionPoolReady
  • PoolCleared to ConnectionPoolCleared
  • PoolClosedEvent to ConnectionPoolClosed
  • GetStarted to ConnectionCheckOutStarted
  • GetFailed to ConnectionCheckOutFailed
  • GetSucceeded to ConnectionCheckedOut
  • ConnectionReturned to ConnectionCheckedIn

CommandFailedEvent

CommandFailedEvent.Failure has been converted from a string type to an error type to convey additional error information and to match the type of other *.Failure fields, like ServerHeartbeatFailedEvent.Failure.

CommandFinishedEvent

The type for ServerConnectionID has been changed to *int64 to prevent an int32 overflow and to differentiate between an ID of 0 and no ID.

CommandStartedEvent

The type for ServerConnectionID has been changed to *int64 to prevent an int32 overflow and to differentiate between an ID of 0 and no ID.
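The following is a minimal sketch of a command monitor that accounts for both of these changes: the now-nullable ServerConnectionID and the Failure field, which is now an error. The logging shown is illustrative.

// v2

monitor := &event.CommandMonitor{
  Started: func(_ context.Context, evt *event.CommandStartedEvent) {
    // ServerConnectionID is now a *int64 and may be nil.
    if evt.ServerConnectionID != nil {
      log.Printf("%q started on server connection %d", evt.CommandName, *evt.ServerConnectionID)
    }
  },
  Failed: func(_ context.Context, evt *event.CommandFailedEvent) {
    // Failure is now an error, so it can be inspected with errors.Is/As.
    if errors.Is(evt.Failure, context.DeadlineExceeded) {
      log.Printf("%q timed out", evt.CommandName)
    }
  },
}

clientOpts := options.Client().SetMonitor(monitor)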

ServerHeartbeatFailedEvent

DurationNanos has been removed in favor of Duration.

ServerHeartbeatSucceededEvent

DurationNanos has been removed in favor of Duration.

Mongo Package

Client

Connect

Client.Connect() has been removed in favor of mongo.Connect(). See the section on NewClient for more details.

The context.Context parameter has been removed from mongo.Connect() because the deployment connector doesn’t accept a context, meaning that the context passed to mongo.Connect() in previous versions didn't serve a purpose.
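As a minimal sketch (the connection URI is illustrative), the only change for callers of the package-level Connect function is dropping the context argument:

// v1

client, err := mongo.Connect(context.TODO(), options.Client().ApplyURI("mongodb://localhost:27017"))
// v2

client, err := mongo.Connect(options.Client().ApplyURI("mongodb://localhost:27017"))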

UseSession[WithOptions]

This example shows how to construct a session object from a context, rather than using a context to perform session operations.

// v1

client.UseSession(context.TODO(), func(sctx mongo.SessionContext) error {
  if err := sctx.StartTransaction(options.Transaction()); err != nil {
    return err
  }

  _, err := coll.InsertOne(sctx, bson.D{{"x", 1}})

  return err
})
// v2

client.UseSession(context.TODO(), func(ctx context.Context) error {
  sess := mongo.SessionFromContext(ctx)

  if err := sess.StartTransaction(options.Transaction()); err != nil {
    return err
  }

  _, err := coll.InsertOne(ctx, bson.D{{"x", 1}})

  return err
})

Collection

Clone

This example shows how to migrate usage of the Collection.Clone method, which no longer returns an error.

// v1

clonedColl, err := coll.Clone(options.Collection())
if err != nil {
  log.Fatalf("failed to clone collection: %v", err)
}
// v2

clonedColl := coll.Clone(options.Collection())

Distinct

The Distinct() collection method now returns a struct that can be decoded, similar to Collection.FindOne. Instead of iterating through an untyped slice, users can decode homogeneous data directly into a typed slice using conventional Go syntax:

// v1

filter := bson.D{{"age", bson.D{{"$gt", 25}}}}

values, err := coll.Distinct(context.TODO(), "name", filter)
if err != nil {
  log.Fatalf("failed to get distinct values: %v", err)
}

people := make([]any, 0, len(values))
for _, value := range values {
  people = append(people, value)
}

fmt.Printf("car-renting persons: %v\n", people)
// v2

filter := bson.D{{"age", bson.D{{"$gt", 25}}}}

res := coll.Distinct(context.TODO(), "name", filter)
if err := res.Err(); err != nil {
  log.Fatalf("failed to get distinct result: %v", err)
}

var people []string
if err := res.Decode(&people); err != nil {
  log.Fatal("failed to decode distinct result: %v", err)
}

fmt.Printf("car-renting persons: %v\n", people)

If the data returned is not same-type (i.e. name is not always a string) a user can iterate through the result directly as a bson.RawArray type:

// v2

filter := bson.D{{"age", bson.D{{"$gt", 25}}}}

res := coll.Distinct(context.TODO(), "name", filter)
if err := res.Err(); err != nil {
  log.Fatalf("failed to get distinct result: %v", err)
}

rawArr, err := res.Raw()
if err != nil {
  log.Fatalf("failed to get raw data: %v", err)
}

values, err := rawArr.Values()
if err != nil {
  log.Fatalf("failed to get values: %v", err)
}

people := make([]string, 0, len(values))
for _, value := range values {
  people = append(people, value.String())
}

fmt.Printf("car-renting persons: %v\n", people)

InsertMany

The documents parameter in the Collection.InsertMany function signature has been changed from an []interface{} type to an any type. This API no longer requires users to copy an existing slice of documents into an []interface{} slice.

// v1

books := []book{
  {
    Name:   "Don Quixote de la Mancha",
    Author: "Miguel de Cervantes",
  },
  {
    Name:   "Cien años de soledad",
    Author: "Gabriel García Márquez",
  },
  {
    Name:   "Crónica de una muerte anunciada",
    Author: "Gabriel García Márquez",
  },
}

booksi := make([]interface{}, len(books))
for i, book := range books {
  booksi[i] = book
}

_, err := collection.InsertMany(ctx, booksi)
if err != nil {
  log.Fatalf("could not insert Spanish authors: %v", err)
}
// v2

books := []book{
  {
    Name:   "Don Quixote de la Mancha",
    Author: "Miguel de Cervantes",
  },
  {
    Name:   "Cien años de soledad",
    Author: "Gabriel García Márquez",
  },
  {
    Name:   "Crónica de una muerte anunciada",
    Author: "Gabriel García Márquez",
  },
}

_, err := collection.InsertMany(ctx, books)
if err != nil {
  log.Fatalf("could not insert Spanish authors: %v", err)
}

Database

ListCollectionSpecifications

ListCollectionSpecifications() returns a slice of structs instead of a slice of pointers.

// v1

var specs []*mongo.CollectionSpecification
specs, _ = db.ListCollectionSpecifications(context.TODO(), bson.D{})
// v2

var specs []mongo.CollectionSpecification
specs, _ = db.ListCollectionSpecifications(context.TODO(), bson.D{})

ErrUnacknowledgedWrite

This sentinel error has been removed from the mongo package. Users that need to check if a write operation was unacknowledged can do so by inspecting the Acknowledged field on the associated struct:

  • BulkWriteResult
  • DeleteResult
  • InsertManyResult
  • InsertOneResult
  • RewrapManyDataKeyResult
  • SingleResult
// v1

res, err := coll.InsertMany(context.TODO(), books)
if errors.Is(err, mongo.ErrUnacknowledgedWrite) {
  // Do something
}
// v2

res, err := coll.InsertMany(context.TODO(), books)
if !res.Acknowledged {
  // Do something
}

DDL commands such as dropping a collection will no longer return ErrUnacknowledgedWrite, nor will they return a result type that can be used to determine acknowledgement. It is recommended not to perform DDL commands with an unacknowledged write concern.

Cursor

Cursor.SetMaxTime has been renamed to Cursor.SetMaxAwaitTime, which specifies the maximum amount of time the server waits for new documents that match the tailable cursor query on a capped collection.
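A minimal sketch, assuming a tailable-await cursor over a capped collection (the collection handle and filter are illustrative):

// v2

findOpts := options.Find().SetCursorType(options.TailableAwait)

cur, err := coll.Find(context.TODO(), bson.D{}, findOpts)
if err != nil {
  log.Fatalf("failed to create cursor: %v", err)
}

// Formerly cur.SetMaxTime in v1.
cur.SetMaxAwaitTime(5 * time.Second)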

GridFS

The gridfs package has been merged into the mongo package. Additionally, gridfs.Bucket has been renamed to mongo.GridFSBucket.

// v1

var bucket *gridfs.Bucket
bucket, _ = gridfs.NewBucket(db, opts)
// v2

var bucket *mongo.GridFSBucket
bucket, _ = db.GridFSBucket(opts)

ErrWrongIndex

ErrWrongIndex has been renamed to the more intuitive ErrMissingChunk, which indicates that the number of chunks read from the server is less than expected.

// v1

n, err := source.Read(buf)
if errors.Is(err, gridfs.ErrWrongIndex) {
  // Do something
}
// v2

n, err := source.Read(buf)
if errors.Is(err, mongo.ErrMissingChunk) {
  // Do something
}

SetWriteDeadline

SetWriteDeadline methods have been removed from GridFS operations in favor of extending bucket methods to include a context.Context argument.

// v1

uploadStream, _ := bucket.OpenUploadStream("filename", uploadOpts)
uploadStream.SetWriteDeadline(time.Now().Add(2*time.Second))
// v2

ctx, cancel := context.WithTimeout(context.TODO(), 2*time.Second)
defer cancel()

uploadStream, _ := bucket.OpenUploadStream(ctx, "filename", uploadOpts)

Additionally, Bucket.DeleteContext(), Bucket.FindContext(), Bucket.DropContext(), and Bucket.RenameContext() have been removed.
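For example, a call that previously used one of the *Context variants now passes the context to the base bucket method. A sketch (fileID is illustrative):

// v1

err := bucket.DeleteContext(ctx, fileID)
// v2

err := bucket.Delete(ctx, fileID)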

IndexOptionsBuilder

mongo.IndexOptionsBuilder has been removed; use the IndexOptions type in the options package instead.
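For example, index options are now built with options.Index() and attached to a mongo.IndexModel. A minimal sketch (the key and option values are illustrative):

// v2

model := mongo.IndexModel{
  Keys:    bson.D{{"x", 1}},
  Options: options.Index().SetUnique(true),
}

_, err := coll.Indexes().CreateOne(context.TODO(), model)
if err != nil {
  log.Fatalf("failed to create index: %v", err)
}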

IndexView

DropAll

Dropping an index replies with a superset of the following message: {nIndexesWas: n}, where n is the number of indexes that existed before the drop. In the case of DropAll, the number of indexes actually dropped is always n - 1, because the index on the _id field cannot be dropped. Since the response carries no other useful information, the DropAll method has been simplified to no longer return the server response.

// v1

res, err := coll.Indexes().DropAll(context.TODO())
if err != nil {
  log.Fatalf("failed to drop indexes: %v", err)
}

type dropResult struct {
  NIndexesWas int
}

dres := dropResult{}
if err := bson.Unmarshal(res, &dres); err != nil {
  log.Fatalf("failed to decode: %v", err)
}

numDropped := dres.NIndexesWas

// Use numDropped
// v2

// Count the indexes before dropping them. This count corresponds to the
// nIndexesWas value previously returned by the server.
cur, err := coll.Indexes().List(context.TODO())
if err != nil {
  log.Fatalf("failed to list indexes: %v", err)
}

numDropped := 0
for cur.Next(context.TODO()) {
  numDropped++
}

if err := coll.Indexes().DropAll(context.TODO()); err != nil {
  log.Fatalf("failed to drop indexes: %v", err)
}

// Use numDropped

DropOne

As with DropAll, dropping an index replies with a superset of the following message: {nIndexesWas: n}. In the case of DropOne, exactly one index is dropped in non-error cases, so the server response carries no useful information and the DropOne method has been simplified to no longer return it.
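A minimal sketch (the index name is illustrative):

// v2

if err := coll.Indexes().DropOne(context.TODO(), "name_1"); err != nil {
  log.Fatalf("failed to drop index: %v", err)
}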

ListSpecifications

Updated to return a slice of structs instead of a slice of pointers. See the Database.ListCollectionSpecifications analogue above for a migration example.

NewClient

NewClient has been removed; use the Connect function in the mongo package instead.

// v1

client, err := mongo.NewClient(options.Client())
if err != nil {
  log.Fatalf("failed to create client: %v", err)
}

if err := client.Connect(context.TODO()); err != nil {
  log.Fatalf("failed to connect to server: %v", err)
}
// v2

client, err := mongo.Connect(options.Client())
if err != nil {
  log.Fatalf("failed to connect to server: %v", err)
}

Session

mongo.Session is now a struct rather than an interface, so driver constructors that return a session (such as Client.StartSession) now return a pointer to a mongo.Session struct, and code will need to be updated accordingly.

// v1

var sessions []mongo.Session
for i := 0; i < numSessions; i++ {
  sess, _ := client.StartSession()
  sessions = append(sessions, sess)
}
// v2

var sessions []*mongo.Session
for i := 0; i < numSessions; i++ {
  sess, _ := client.StartSession()
  sessions = append(sessions, sess)
}

SingleResult

SingleResult.DecodeBytes has been renamed to the more intuitive SingleResult.Raw.
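For example (the filter is illustrative):

// v1

raw, err := coll.FindOne(context.TODO(), bson.D{{"x", 1}}).DecodeBytes()
// v2

raw, err := coll.FindOne(context.TODO(), bson.D{{"x", 1}}).Raw()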

WithSession

This example shows how to update the callback for mongo.WithSession to use a context.Context implementation, rather than the custom mongo.SessionContext.

// v1

mongo.WithSession(context.TODO(), sess, func(sctx mongo.SessionContext) error {
  // Callback
  return nil
})
// v2

mongo.WithSession(context.TODO(), sess, func(ctx context.Context) error {
  // Callback
  return nil
})

Options Package

The following fields were marked for internal use only and do not have a replacement:

  • ClientOptions.AuthenticateToAnything
  • FindOptions.OplogReplay
  • FindOneOptions.OplogReplay

The following fields were removed because they are no longer supported by the server (the parenthesized number is the server version that dropped support):

  • FindOptions.Snapshot (4.0)
  • FindOneOptions.Snapshot (4.0)
  • IndexOptions.Background (4.2)

Options

The Go Driver offers users the ability to pass multiple options objects to operations, which are merged at the field level using a last-one-wins policy:

function MergeOptions(target, optionsList):
  for each options in optionsList:
    if options is null or undefined:
      continue

    for each key, value in options:
      if value is not null or undefined:
        target[key] = value

  return target

In 1.x, the driver maintained this merge logic separately for every options type, e.g. MergeClientOptions. For v2, we've abstracted away the merge functions by changing the options builder pattern to maintain a slice of setter functions rather than setting data directly on an options object. Typical usage of options does not change; for example, the following is still honored:

opts1 := options.Client().SetAppName("appName")
opts2 := options.Client().SetConnectTimeout(math.MaxInt64)

_, err := mongo.Connect(opts1, opts2)
if err != nil {
	panic(err)
}

There are two notable cases that will require a migration: (1) modifying options data after building, and (2) creating a slice of options objects.

Modifying Fields After Building

The options builder is now a slice of setters rather than a single options object. To modify the data after building, users will need to create a custom setter function and append it to the builder's Opts slice:

// v1

opts := options.Client().ApplyURI("mongodb://x:y@localhost:27017")

if opts.Auth.Username == "x" {
  opts.Auth.Password = "z"
}
// v2

opts := options.Client().ApplyURI("mongodb://x:y@localhost:27017")

// If the username is "x", use password "z"
pwSetter := func(opts *options.ClientOptions) error {
  if opts.Auth.Username == "x" {
    opts.Auth.Password = "z"
  }

  return nil
}

opts.Opts = append(opts.Opts, pwSetter)

Creating a Slice of Options

Using options created with the builder pattern as elements in a slice:

// v1

opts1 := options.Client().SetAppName("appName")
opts2 := options.Client().SetConnectTimeout(math.MaxInt64)

opts := []*options.ClientOptions{opts1, opts2}
_, err := mongo.Connect(opts...)
// v2

opts1 := options.Client().SetAppName("appName")
opts2 := options.Client().SetConnectTimeout(math.MaxInt64)

// Option objects are "Listers" in v2, objects that hold a list of setters
opts := []options.Lister[options.ClientOptions]{opts1, opts2}
_, err := mongo.Connect(opts...)

Creating Options from Builder

Since a builder is just a slice of option setters, users can create options directly from a builder:

// v1

opt := &options.ClientOptions{}
opt.ApplyURI(uri)

return clientOptionAdder{option: opt}
// v2

var opts options.ClientOptions
for _, set := range options.Client().ApplyURI(uri).Opts {
  _ = set(&opts)
}

return clientOptionAdder{option: &opts}

FindOneOptions

The following fields are not valid for a findOne operation and have been removed from FindOneOptions:

  • BatchSize
  • CursorType
  • MaxAwaitTime
  • NoCursorTimeout

Merge*Options

All functions that merge options have been removed in favor of a generic solution. See GODRIVER-2696 for more information.
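Instead of merging options structs up front, pass all of the options objects to the operation and let the driver apply them in order. A sketch using Find (options.MergeFindOptions is the v1 helper; the filter and options variables are illustrative):

// v1

findOpts := options.MergeFindOptions(opts1, opts2)

cur, err := coll.Find(context.TODO(), filter, findOpts)
// v2

cur, err := coll.Find(context.TODO(), filter, opts1, opts2)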

MaxTime

Users should time out operations using either the client-level operation timeout defined by ClientOptions.Timeout or by setting a deadline on the context object passed to an operation. The following fields and methods have been removed:

  • AggregateOptions.MaxTime and AggregateOptions.SetMaxTime
  • ClientOptions.SocketTimeout and ClientOptions.SetSocketTimeout
  • CountOptions.MaxTime and CountOptions.SetMaxTime
  • DistinctOptions.MaxTime and DistinctOptions.SetMaxTime
  • EstimatedDocumentCountOptions.MaxTime and EstimatedDocumentCountOptions.SetMaxTime
  • FindOptions.MaxTime and FindOptions.SetMaxTime
  • FindOneOptions.MaxTime and FindOneOptions.SetMaxTime
  • FindOneAndReplaceOptions.MaxTime and FindOneAndReplaceOptions.SetMaxTime
  • FindOneAndUpdateOptions.MaxTime and FindOneAndUpdateOptions.SetMaxTime
  • GridFSFindOptions.MaxTime and GridFSFindOptions.SetMaxTime
  • CreateIndexesOptions.MaxTime and CreateIndexesOptions.SetMaxTime
  • DropIndexesOptions.MaxTime and DropIndexesOptions.SetMaxTime
  • ListIndexesOptions.MaxTime and ListIndexesOptions.SetMaxTime
  • SessionOptions.DefaultMaxCommitTime and SessionOptions.SetDefaultMaxCommitTime
  • TransactionOptions.MaxCommitTime and TransactionOptions.SetMaxCommitTime

This example illustrates how to define an operation-level timeout in v2, either through the client-level Timeout option or by setting a deadline on the context passed to an operation; the same approach applies to any operation that previously accepted MaxTime.
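A minimal sketch (the URI, database and collection names, filter, and durations are illustrative):

// v2

// Client-level operation timeout applied to every operation.
clientOpts := options.Client().
  ApplyURI("mongodb://localhost:27017").
  SetTimeout(10 * time.Second)

client, err := mongo.Connect(clientOpts)
if err != nil {
  log.Fatalf("failed to connect to server: %v", err)
}

coll := client.Database("db").Collection("coll")

// Or an operation-level timeout via a context deadline, replacing MaxTime.
ctx, cancel := context.WithTimeout(context.TODO(), 2*time.Second)
defer cancel()

cur, err := coll.Find(ctx, bson.D{})
if err != nil {
  log.Fatalf("failed to execute find: %v", err)
}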

SessionOptions

DefaultReadConcern, DefaultReadPreference, and DefaultWriteConcern are all specific to transactions started by the session. Rather than maintaining three separate fields on the session options, v2 combines them into DefaultTransactionOptions, which accepts a TransactionOptions object.

// v1

sessOpts := options.Session().SetDefaultReadPreference(readpref.Primary())
// v2

txnOpts := options.Transaction().SetReadPreference(readpref.Primary())
sessOpts := options.Session().SetDefaultTransactionOptions(txnOpts)

Read Concern Package

The Option type and associated builder functions have been removed in v2 in favor of a ReadConcern literal declaration.

Options Builder

This example shows how to replace usage of the New() and Level() builder functions with a ReadConcern literal declaration.

// v1

localRC := readconcern.New(readconcern.Level("local"))
// v2

localRC := &readconcern.ReadConcern{Level: "local"}

ReadConcern.GetLevel()

The ReadConcern.GetLevel() method has been removed. Use the ReadConcern.Level field to get the level instead.
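For example, given a *readconcern.ReadConcern named rc:

// v1

level := rc.GetLevel()
// v2

level := rc.Level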

Write Concern Package

WTimeout

The WTimeout field has been removed from the WriteConcern struct. Instead, users should define a timeout at the operation-level using a context object.
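For example, a write that previously relied on WTimeout can bound its duration with a context deadline instead. A sketch (the write concern, durations, and collection name are illustrative):

// v1

wc := writeconcern.New(writeconcern.WMajority(), writeconcern.WTimeout(5*time.Second))
coll := db.Collection("coll", options.Collection().SetWriteConcern(wc))

_, err := coll.InsertOne(context.TODO(), bson.D{{"x", 1}})
// v2

wc := writeconcern.Majority()
coll := db.Collection("coll", options.Collection().SetWriteConcern(wc))

ctx, cancel := context.WithTimeout(context.TODO(), 5*time.Second)
defer cancel()

_, err := coll.InsertOne(ctx, bson.D{{"x", 1}})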

Bsoncodecs / Bsonoptions Package

*Codec structs and New*Codec methods have been removed. Additionally, the correlated bson/bsonoptions package has been removed, so codecs are not directly configurable using *CodecOptions structs in Go Driver 2.0. To configure the encode and decode behavior, use the configuration methods on a bson.Encoder or bson.Decoder. To configure the encode and decode behavior for a mongo.Client, use options.ClientOptionsBuilder.SetBSONOptions with BSONOptions.

This example shows how to set ObjectIDAsHex.

// v1

var res struct {
	ID string
}

codecOpt := bsonoptions.StringCodec().SetDecodeObjectIDAsHex(true)
strCodec := bsoncodec.NewStringCodec(codecOpt)
reg := bson.NewRegistryBuilder().RegisterDefaultDecoder(reflect.String, strCodec).Build()
dc := bsoncodec.DecodeContext{Registry: reg}
dec, err := bson.NewDecoderWithContext(dc, bsonrw.NewBSONDocumentReader(DOCUMENT))
if err != nil {
	panic(err)
}
err = dec.Decode(&res)
if err != nil {
	panic(err)
}
// v2

var res struct {
	ID string
}

decoder := bson.NewDecoder(bson.NewDocumentReader(bytes.NewReader(DOCUMENT)))
decoder.ObjectIDAsHexString()
err := decoder.Decode(&res)
if err != nil {
	panic(err)
}

RegistryBuilder

The RegistryBuilder struct and the bson.NewRegistryBuilder function have been removed in favor of (*bson.Decoder).SetRegistry and (*bson.Encoder).SetRegistry.

StructTag / StructTagParserFunc

The StructTag struct as well as the StructTagParserFunc type have been removed. Therefore, users have to specify BSON tags manually rather than defining custom BSON tag parsers.
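For example, tags that were previously produced by a custom StructTagParserFunc must now be written on the struct fields directly. An illustrative sketch:

// v2

type User struct {
  Name  string `bson:"name"`
  Email string `bson:"email,omitempty"`
  Age   int    `bson:"age,omitempty"`
}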

TransitionError

The TransitionError struct has been merged into the bson package.

Other interfaces removed from the bsoncodec package

CodecZeroer and Proxy have been removed.

Bsonrw Package

The bson/bsonrw package has been merged into the bson package.

As a result, interfaces ArrayReader, ArrayWriter, DocumentReader, DocumentWriter, ValueReader, and ValueWriter are located in the bson package now.

Interfaces BytesReader, BytesWriter, and ValueWriterFlusher have been removed.

Functions NewExtJSONValueReader and NewExtJSONValueWriter have been moved to the bson package as well.

Moreover, the ErrInvalidJSON variable has been moved into the bson package.

NewBSONDocumentReader / NewBSONValueReader

The bsonrw.NewBSONDocumentReader function has been replaced by NewDocumentReader in the bson package, which reads from an io.Reader instead of a byte slice.

NewBSONValueReader has been removed.

This example creates a Decoder that reads from a byte slice.

// v1

b, _ := bson.Marshal(bson.M{"isOK": true})
decoder, err := bson.NewDecoder(bsonrw.NewBSONDocumentReader(b))
// v2

b, _ := bson.Marshal(bson.M{"isOK": true})
decoder := bson.NewDecoder(bson.NewDocumentReader(bytes.NewReader(b)))

NewBSONValueWriter

The bsonrw.NewBSONValueWriter function has been renamed to NewDocumentWriter in the bson package.

This example creates an Encoder that writes BSON values to a bytes.Buffer.

// v1

buf := new(bytes.Buffer)
vw, err := bsonrw.NewBSONValueWriter(buf)
encoder, err := bson.NewEncoder(vw)
// v2

buf := new(bytes.Buffer)
vw := bson.NewDocumentWriter(buf)
encoder := bson.NewEncoder(vw)

Mgocompat Package

The bson/mgocompat package has been simplified, and its implementation has been merged into the bson package.

ErrSetZero has been renamed to ErrMgoSetZero in the bson package.

The NewRegistryBuilder function has been replaced by NewMgoRegistry in the bson package.

Similarly, the NewRespectNilValuesRegistryBuilder function has been replaced by NewRespectNilValuesMgoRegistry.
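For example, a decoder can be configured with the mgo-compatible registry as sketched below (b is assumed to be a marshaled BSON document):

// v2

dec := bson.NewDecoder(bson.NewDocumentReader(bytes.NewReader(b)))
dec.SetRegistry(bson.NewMgoRegistry())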

Primitive Package

The bson/primitive package has been merged into the bson package.

Additionally, bson.D now implements the json.Marshaler and json.Unmarshaler interfaces, using a key-value representation in "regular" (i.e. non-Extended) JSON.

The bson.D.String and bson.M.String methods return an Extended JSON representation of the document.

// v2

d := D{{"a", 1}, {"b", 2}}
fmt.Printf("%s\n", d)
// Output: {"a":{"$numberInt":"1"},"b":{"$numberInt":"2"}}

Bson Package

DefaultRegistry / NewRegistryBuilder

DefaultRegistry has been removed. Users who relied on bson.DefaultRegistry, either to access the default registry behavior or to modify the default registry globally, are affected by this change and will need to configure their registry through another mechanism, such as (*bson.Decoder).SetRegistry or (*bson.Encoder).SetRegistry.

The NewRegistryBuilder function has been removed along with the bsoncodec.RegistryBuilder struct as mentioned above.

Decoder

NewDecoder

The signature of NewDecoder has been updated so that it no longer returns an error.

NewDecoderWithContext

NewDecoderWithContext has been removed in favor of using the SetRegistry method to set a registry.

Correspondingly, the following functions have been removed:

  • UnmarshalWithRegistry
  • UnmarshalWithContext
  • UnmarshalValueWithRegistry
  • UnmarshalExtJSONWithRegistry
  • UnmarshalExtJSONWithContext

For example, a boolean type can be stored in the database as a BSON boolean or 32/64-bit integer. Given a registry:

type lenientBool bool

lenientBoolType := reflect.TypeOf(lenientBool(true))

lenientBoolDecoder := func(
	dc bsoncodec.DecodeContext,
	vr bsonrw.ValueReader,
	val reflect.Value,
) error {
	// All decoder implementations should check that val is valid, settable,
	// and is of the correct kind before proceeding.
	if !val.IsValid() || !val.CanSet() || val.Type() != lenientBoolType {
		return bsoncodec.ValueDecoderError{
			Name:     "lenientBoolDecoder",
			Types:    []reflect.Type{lenientBoolType},
			Received: val,
		}
	}

	var result bool
	switch vr.Type() {
	case bsontype.Boolean:
		b, err := vr.ReadBoolean()
		if err != nil {
			return err
		}
		result = b
	case bsontype.Int32:
		i32, err := vr.ReadInt32()
		if err != nil {
			return err
		}
		result = i32 != 0
	case bsontype.Int64:
		i64, err := vr.ReadInt64()
		if err != nil {
			return err
		}
		result = i64 != 0
	default:
		return fmt.Errorf(
			"received invalid BSON type to decode into lenientBool: %s",
			vr.Type())
	}

	val.SetBool(result)
	return nil
}

// Create the registry
reg := bson.NewRegistry()
reg.RegisterTypeDecoder(
	lenientBoolType,
	bsoncodec.ValueDecoderFunc(lenientBoolDecoder))

With this custom decoder registered, BSON 32/64-bit integer values are decoded into lenientBool as true if they are non-zero.

// v1
// Use UnmarshalWithRegistry

// Marshal a BSON document with a single field "isOK" that is a non-zero
// integer value.
b, err := bson.Marshal(bson.M{"isOK": 1})
if err != nil {
	panic(err)
}

// Now try to decode the BSON document to a struct with a field "IsOK" that
// is type lenientBool. Expect that the non-zero integer value is decoded
// as boolean true.
type MyDocument struct {
	IsOK lenientBool `bson:"isOK"`
}
var doc MyDocument
err = bson.UnmarshalWithRegistry(reg, b, &doc)
if err != nil {
	panic(err)
}
fmt.Printf("%+v\n", doc)
// Output: {IsOK:true}
// v1
// Use NewDecoderWithContext

// Marshal a BSON document with a single field "isOK" that is a non-zero
// integer value.
b, err := bson.Marshal(bson.M{"isOK": 1})
if err != nil {
	panic(err)
}

// Now try to decode the BSON document to a struct with a field "IsOK" that
// is type lenientBool. Expect that the non-zero integer value is decoded
// as boolean true.
type MyDocument struct {
	IsOK lenientBool `bson:"isOK"`
}
var doc MyDocument
dc := bsoncodec.DecodeContext{Registry: reg}
dec, err := bson.NewDecoderWithContext(dc, bsonrw.NewBSONDocumentReader(b))
if err != nil {
	panic(err)
}
err = dec.Decode(&doc)
if err != nil {
	panic(err)
}
fmt.Printf("%+v\n", doc)
// Output: {IsOK:true}
// v2
// Use SetRegistry

// Marshal a BSON document with a single field "isOK" that is a non-zero
// integer value.
b, err := bson.Marshal(bson.M{"isOK": 1})
if err != nil {
	panic(err)
}

// Now try to decode the BSON document to a struct with a field "IsOK" that
// is type lenientBool. Expect that the non-zero integer value is decoded
// as boolean true.
type MyDocument struct {
	IsOK lenientBool `bson:"isOK"`
}
var doc MyDocument
dec := bson.NewDecoder(bson.NewDocumentReader(bytes.NewReader(b)))
dec.SetRegistry(reg)
err = dec.Decode(&doc)
if err != nil {
	panic(err)
}
fmt.Printf("%+v\n", doc)
// Output: {IsOK:true}

SetContext

The SetContext method has been removed in favor of using SetRegistry to set the registry of a decoder.

SetRegistry

The signature of SetRegistry has been updated so that it no longer returns an error.

Reset

The signature of Reset has been updated so that it no longer returns an error.

DefaultDocumentD / DefaultDocumentM

Decoder.DefaultDocumentD has been removed because a document, including a top-level value (e.g. when an empty interface value is passed to Decode), is now always decoded into a bson.D by default. Use Decoder.DefaultDocumentM to always decode a document into a bson.M and avoid unexpected decode results.
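A minimal sketch (b is assumed to be a marshaled BSON document):

// v2

var result any

dec := bson.NewDecoder(bson.NewDocumentReader(bytes.NewReader(b)))
dec.DefaultDocumentM()

if err := dec.Decode(&result); err != nil {
  panic(err)
}

// result is a bson.M; without DefaultDocumentM it would be a bson.D.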

ObjectIDAsHexString

The Decoder.ObjectIDAsHexString method enables decoding a BSON ObjectId into a hexadecimal string. If this option is not set, the decoder returns an error by default rather than decoding the ObjectId into the UTF-8 representation of its raw bytes, which would produce a garbled and unusable string.

Encoder

NewEncoder

The signature of NewEncoder has been updated so that it no longer returns an error.

NewEncoderWithContext

NewEncoderWithContext has been removed in favor of using the SetRegistry method to set a registry.

Correspondingly, the following functions have been removed:

  • MarshalWithRegistry
  • MarshalWithContext
  • MarshalAppend
  • MarshalAppendWithRegistry
  • MarshalAppendWithContext
  • MarshalValueWithRegistry
  • MarshalValueWithContext
  • MarshalValueAppendWithRegistry
  • MarshalValueAppendWithContext
  • MarshalExtJSONWithRegistry
  • MarshalExtJSONWithContext
  • MarshalExtJSONAppendWithRegistry
  • MarshalExtJSONAppendWithContext

Here is an example of a registry with an encoder for a negatedInt type that multiplies the input value by -1 when encoding.

type negatedInt int

negatedIntType := reflect.TypeOf(negatedInt(0))

negatedIntEncoder := func(
	ec bsoncodec.EncodeContext,
	vw bsonrw.ValueWriter,
	val reflect.Value,
) error {
	// All encoder implementations should check that val is valid and is of
	// the correct type before proceeding.
	if !val.IsValid() || val.Type() != negatedIntType {
		return bsoncodec.ValueEncoderError{
			Name:     "negatedIntEncoder",
			Types:    []reflect.Type{negatedIntType},
			Received: val,
		}
	}

	// Negate val and encode as a BSON int32 if it can fit in 32 bits and a
	// BSON int64 otherwise.
	negatedVal := val.Int() * -1
	if math.MinInt32 <= negatedVal && negatedVal <= math.MaxInt32 {
		return vw.WriteInt32(int32(negatedVal))
	}
	return vw.WriteInt64(negatedVal)
}

// Create the registry.
reg := bson.NewRegistry()
reg.RegisterTypeEncoder(
	negatedIntType,
	bsoncodec.ValueEncoderFunc(negatedIntEncoder))

Encode by creating a custom encoder with the registry:

// v1
// Use MarshalWithRegistry

b, err := bson.MarshalWithRegistry(reg, bson.D{{"negatedInt", negatedInt(1)}})
if err != nil {
	panic(err)
}
fmt.Println(bson.Raw(b).String())
// Output: {"negatedint": {"$numberInt":"-1"}}
// v1
// Use NewEncoderWithContext

buf := new(bytes.Buffer)
vw, err := bsonrw.NewBSONValueWriter(buf)
if err != nil {
	panic(err)
}
ec := bsoncodec.EncodeContext{Registry: reg}
enc, err := bson.NewEncoderWithContext(ec, vw)
if err != nil {
	panic(err)
}
err = enc.Encode(bson.D{{"negatedInt", negatedInt(1)}})
if err != nil {
	panic(err)
}
fmt.Println(bson.Raw(buf.Bytes()).String())
// Output: {"negatedint": {"$numberInt":"-1"}}
// v2
// Use SetRegistry

buf := new(bytes.Buffer)
vw := bson.NewDocumentWriter(buf)
enc := bson.NewEncoder(vw)
enc.SetRegistry(reg)
err := enc.Encode(bson.D{{"negatedInt", negatedInt(1)}})
if err != nil {
	panic(err)
}
fmt.Println(bson.Raw(buf.Bytes()).String())
// Output: {"negatedint": {"$numberInt":"-1"}}

SetContext

The SetContext method has been removed in favor of using SetRegistry to set the registry of an encoder.

SetRegistry

The signature of SetRegistry has been updated so that it no longer returns an error.

Reset

The signature of Reset has been updated so that it no longer returns an error.

RawArray Type

A new RawArray type has been added to the bson package as a primitive type to represent a BSON array. Correspondingly, RawValue.Array() returns a RawArray instead of Raw.
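A minimal sketch (the document contents are illustrative):

// v2

doc, err := bson.Marshal(bson.D{{"tags", bson.A{"go", "mongodb"}}})
if err != nil {
  panic(err)
}

// Array() now returns a bson.RawArray instead of bson.Raw.
arr := bson.Raw(doc).Lookup("tags").Array()

values, err := arr.Values()
if err != nil {
  panic(err)
}

for _, v := range values {
  fmt.Println(v.StringValue())
}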

ValueMarshaler

The MarshalBSONValue method of the ValueMarshaler interface now returns a byte value representing the BSON type, so that implementations do not need to import the bsontype package.

ValueUnmarshaler

Likewise, the UnmarshalBSONValue method of the ValueUnmarshaler interface now takes a byte argument representing the BSON type, so that implementations do not need to import the driver's BSON type package.
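For reference, this changes the method signatures that custom types must implement from the v1 forms to the v2 forms sketched below:

// v1

MarshalBSONValue() (bsontype.Type, []byte, error)
UnmarshalBSONValue(t bsontype.Type, data []byte) error
// v2

MarshalBSONValue() (typ byte, data []byte, err error)
UnmarshalBSONValue(typ byte, data []byte) error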