`livebooks/readme.livemd` (+33, -27)

# DataSchema
## Dependencies
```elixir
Mix.install([{:data_schema, path: "./"}])
```
Data schemas are declarative descriptions of how to create a struct from some input data. You can set up different schemas to handle different kinds of input data. By default we assume the incoming data is a map, but you can configure schemas to work with any arbitrary data input, including XML and JSON.

There are 5 kinds of struct fields we could want:

1. `field` - The value will be a casted value from the source data.
2. `list_of` - The value will be a list of casted values created from the source data.
3. `has_one` - The value will be created from a nested data schema (so will be a struct).
4. `has_many` - The value will be created by casting a list of values into a data schema.
   (You end up with a list of structs defined by the provided schema.) Similar to `has_many` in Ecto.
5. `aggregate` - The value will be a casted value formed from multiple bits of data in the source.
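
As a rough sketch of how these field kinds look in a schema definition (the paths, cast functions, and nested schema modules below are illustrative assumptions, not part of the worked example later in this document):

```elixir
defmodule SketchPost do
  import DataSchema, only: [data_schema: 1]

  data_schema([
    # field: a single value pulled from the source and run through a cast function.
    field: {:title, "title", &{:ok, to_string(&1)}},
    # list_of: a list of values, each run through the cast function.
    list_of: {:tags, "tags", &{:ok, to_string(&1)}},
    # has_one: a nested map cast with another data schema, producing a struct.
    has_one: {:author, "author", Author},
    # has_many: a list of nested maps, each cast with the given schema.
    has_many: {:comments, "comments", Comment},
    # aggregate: several pieces of source data combined into one casted value
    # (the exact shape of the second element may differ between library versions).
    aggregate: {:post_datetime, %{date: "date", time: "time"}, &SketchPost.to_datetime/1}
  ])

  def to_datetime(%{date: date, time: time}) do
    with {:ok, date} <- Date.from_iso8601(date),
         {:ok, time} <- Time.from_iso8601(time) do
      NaiveDateTime.new(date, time)
    end
  end
end
```

Here `Author` and `Comment` would themselves be modules defined with `data_schema/1`.
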
Available options are:

* `:optional?` - specifies whether or not the field in the struct should be included in
  the `@enforce_keys` for the struct. By default all fields are required but you can mark
  them as optional by setting this to `true`. This will also be checked when creating a
  struct with `DataSchema.to_struct/2`, returning an error if the required field is null.
* `:empty_values` - allows you to define what values should be used as "empty" for a
  given field. If either the value returned from the data accessor or the casted value is
  equivalent to any element in this list, that field is deemed to be empty. Defaults to `[nil]`,
  meaning `nil` is always considered "empty".
* `:default` - specifies a 0 arity function that will be called to produce a default value for a field
  when casting. This function will only be called if a field is found to be empty AND optional.
  If it's empty and not optional we will error.
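
As a sketch of how these options can be attached to individual fields (the syntax here - a keyword list as the final element of the field tuple - is an assumption, so check the library docs for the exact form):

```elixir
defmodule SketchUser do
  import DataSchema, only: [data_schema: 1]

  data_schema([
    # Required by default: an empty value here makes `DataSchema.to_struct/2` return an error.
    field: {:name, "name", &{:ok, to_string(&1)}},
    # Optional: left out of @enforce_keys and allowed to be empty.
    field: {:nickname, "nickname", &{:ok, to_string(&1)}, [optional?: true]},
    # Treat "" as empty too, and fall back to a default when the field is empty and optional.
    field:
      {:bio, "bio", &{:ok, to_string(&1)},
       [optional?: true, empty_values: [nil, ""], default: fn -> "N/A" end]}
  ])
end
```
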
To see this better, let's look at a very simple example. Assume our input data looks like this:

```elixir
source_data = %{
  # ...
}
```
And now let's assume the struct we wish to make is this one:
```elixir
%BlogPost{
  # ...
}
```
We can describe the following schemas to enable this:
```elixir
defmodule BlogPost do
  # ...
end
```
Then to transform some input data into the desired struct we can call `DataSchema.to_struct/2`, which works recursively to transform the input data into the struct defined by the schema.
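
For example, something like the following (a sketch: the `{:ok, struct} | {:error, error}` return shape shown here is the common convention, so verify it against the version you have installed):

```elixir
# `source_data` is the input map from above and `BlogPost` is the schema we just defined.
case DataSchema.to_struct(source_data, BlogPost) do
  {:ok, %BlogPost{} = post} -> post
  {:error, error} -> raise "could not build a BlogPost: #{inspect(error)}"
end
```
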
As we mentioned before, we want to be able to handle multiple different kinds of source data in our schemas. For each type of source data we want to be able to specify how to access the data for each field type. We do that by providing a "data accessor" - a module that implements the `DataSchema.DataAccessBehaviour` - via a `@data_accessor` module attribute on the schema. By default, if you do not provide this module attribute, we use `DataSchema.MapAccessor`. That means the above example is equivalent to doing the following:
```elixir
defmodule DataSchema.MapAccessor do
  # ...
end
```
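
A map-based accessor implementing `DataSchema.DataAccessBehaviour` can be sketched roughly as follows (the callback names mirror the field kinds above, but treat the exact set of callbacks and their arities as assumptions):

```elixir
defmodule SketchMapAccessor do
  @behaviour DataSchema.DataAccessBehaviour

  # Each callback receives the source data and the path declared in the schema,
  # and returns the raw value that will be handed to the field's cast function.
  @impl true
  def field(data, field), do: Map.get(data, field)

  @impl true
  def list_of(data, field), do: Map.get(data, field)

  @impl true
  def has_one(data, field), do: Map.get(data, field)

  @impl true
  def has_many(data, field), do: Map.get(data, field)

  # Depending on the library version the behaviour may also require an
  # aggregate-related callback; add it if the compiler warns about one.
end
```
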
To save repeating `@data_accessor DataSchema.MapAccessor` on all of your schemas you could use a `__using__` macro like so:
```elixir
defmodule DraftPost do
  # ...
end
```
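
In outline, such a `__using__` macro might look like this (a sketch: the contents of `MapSchema` and the `DraftPost` field are assumptions based on the surrounding text):

```elixir
defmodule MapSchema do
  defmacro __using__(_opts) do
    quote do
      # Every schema that does `use MapSchema` gets the import and the
      # default accessor without having to repeat them.
      import DataSchema, only: [data_schema: 1]
      @data_accessor DataSchema.MapAccessor
    end
  end
end

defmodule DraftPost do
  use MapSchema

  data_schema([
    field: {:content, "content", &{:ok, to_string(&1)}}
  ])
end
```
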
This means that should we want to change how we access data (say we wanted to use `Map.fetch!` instead of `Map.get`), we would only need to change the accessor in one place - inside the `__using__` macro. It also gives you a handy place to provide other functionality for the structs that get created, for example a default `Inspect` protocol implementation:
```elixir
defmodule MapSchema do
  # ...
end
```
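
One way to do that is to derive a restrictive `Inspect` implementation inside the macro (a sketch rather than the library's exact example; `@derive {Inspect, only: []}` hides every field unless a schema opts back in):

```elixir
defmodule MapSchema do
  defmacro __using__(_opts) do
    quote do
      import DataSchema, only: [data_schema: 1]
      @data_accessor DataSchema.MapAccessor
      # Redact all struct fields from inspect output (and therefore most logs)
      # by default; individual schemas can derive their own Inspect to expose
      # chosen fields.
      @derive {Inspect, only: []}
    end
  end
end
```
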
This could help ensure you never log sensitive fields by requiring you to explicitly implement the `Inspect` protocol for a struct in order to see its fields.
### XML Schemas
```elixir
defmodule XpathAccessor do
  # ...
end
```
As we can see our accessor uses the library [Sweet XML](https://github.com/kbrw/sweet_xml) to access the XML. That means if we wanted to change the library later we would only need to alter this one module for all of our schemas to benefit from the change.
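
In outline, `XpathAccessor` might be implemented like this (a sketch: the callbacks and XPath handling are assumptions; the `s` and `l` modifiers are Sweet XML's usual string and list modifiers):

```elixir
defmodule XpathAccessor do
  @behaviour DataSchema.DataAccessBehaviour
  import SweetXml, only: [sigil_x: 2]

  # A single value, returned as a string.
  @impl true
  def field(data, path), do: SweetXml.xpath(data, ~x"#{path}"s)

  # A list of values.
  @impl true
  def list_of(data, path), do: SweetXml.xpath(data, ~x"#{path}"l)

  # A nested element to be cast by a nested schema.
  @impl true
  def has_one(data, path), do: SweetXml.xpath(data, ~x"#{path}")

  # A list of nested elements, one per struct in the resulting list.
  @impl true
  def has_many(data, path), do: SweetXml.xpath(data, ~x"#{path}"l)
end
```
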