Refactor vertex functions to enable composite paths #6560

Open
1 of 32 tasks
GregStanton opened this issue Nov 17, 2023 · 45 comments

@GregStanton
Collaborator

GregStanton commented Nov 17, 2023

Increasing Access

This enhancement will provide greater access to individuals in multiple circumstances.

Code contributors:
For volunteers who have less experience with large codebases or less free time to navigate them, it will become feasible to contribute to vertex-related features. For example, there is community support for a new arcVertex() function, but the complexity of the current codebase has impeded progress.

Users:
For users who want to make composite shapes, the existing p5 API will work as they expect it to, so they will face less confusion. A knowledge barrier will also be removed, since they won't need to learn the native canvas API or SVG in order to make composite shapes.

Most appropriate sub-area of p5.js?

  • Accessibility
  • Color
  • Core/Environment/Rendering
  • Data
  • DOM
  • Events
  • Image
  • IO
  • Math
  • Typography
  • Utilities
  • WebGL
  • Build Process
  • Unit Testing
  • Internationalization
  • Friendly Errors
  • Other (specify if possible)

Feature enhancement details

Problem

Mixing path primitives

Users expect to be able to create composite paths with a mix of path primitives (line segments, quadratic Bézier curves, cubic Bézier curves, and Catmull-Rom splines). The p5 reference does not indicate that general mixing is not possible, and it's already possible with native canvas paths and SVG paths.

However, the current p5 implementation assumes shapes include at most one type of nonlinear path. For example, see the 2D renderer's endShape() method. This limitation leads to unexpected behavior in a range of cases. A couple of these are illustrated below.

Expected shape vs. actual result (image comparisons in the original issue):

  • Quadratic and cubic Béziers
  • Lines and Catmull-Rom splines

Complexity of the codebase

With the current implementation, it will be difficult to support mixing vertex types (e.g. #906), to track down bugs (e.g. #6555), and to implement new features (e.g. #6459). The difficulty arises from specific functions as well as broader code organization.

Solution

Separation of concerns

We can solve many of these problems at once by separating concerns. The idea is to create classes for the different vertex types, each with their own data and their own methods that add them to the canvas. Then vertex(), quadraticVertex(), etc. just push vertex instances of the appropriate type into a path array. When endShape() is called, it simply loops over the vertex objects, and they add themselves to the canvas; endShape() doesn't need to know that curveVertex() requires special logic, for example. By refactoring in this way, mixing path primitives is essentially automatic, with no change to the public p5 API.
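The idea above can be sketched in plain JavaScript. All class, function, and property names here are illustrative stand-ins rather than actual p5.js internals; the point is just that endShape() needs no type-specific logic.

```javascript
// Each vertex type knows how to add itself to a canvas-like context.
class LineVertex {
  constructor(x, y) { this.x = x; this.y = y; }
  addToCanvasPath(ctx) { ctx.lineTo(this.x, this.y); }
}

class QuadraticVertex {
  constructor(cx, cy, x, y) { Object.assign(this, { cx, cy, x, y }); }
  addToCanvasPath(ctx) { ctx.quadraticCurveTo(this.cx, this.cy, this.x, this.y); }
}

// The vertex functions just push instances of the appropriate type...
const path = [];
function vertex(x, y) { path.push(new LineVertex(x, y)); }
function quadraticVertex(cx, cy, x, y) { path.push(new QuadraticVertex(cx, cy, x, y)); }

// ...and endShape() simply loops, letting each vertex add itself.
function endShape(ctx) {
  for (const v of path) v.addToCanvasPath(ctx);
}
```

Because endShape() only delegates, mixing primitive types in one path falls out automatically.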

Proof of concept

I put together a proof of concept in the p5 Web Editor; it includes an implementation of arcVertex() as proposed in issue #6459. A separate sketch shows how it fixes unexpected mixing behavior. With these two enhancements together (arcVertex() and composite paths), p5's beginShape()/endShape() will be able to produce composite paths with all the basic primitives of the native canvas API and SVG, plus Catmull-Rom splines. Based on some discussion, it looks like it will also help with the bug observed in issue #6555. Although the proof of concept focuses on the 2D case, the goal is to accommodate the 3D case as well, with some adjustments.

Update: Task list

Calling all volunteers!

@davepagurek, @limzykenneth, @xanderjl, @Gaurav-1306, @Jaivignesh-afk, @sudhanshuv1, @capGoblin, @lindapaiste

Based on discussions here, in the online meeting, and on Discord, we're going to use this comment as an overall implementation guide, and we've decided to organize the work according to the following tasks:

  • Update reference to explicitly describe the new behaviors
  • Create snapshots of existing behavior, for the visual tests
  • Decide how to refactor normal()*
  • Decide whether to rename default contour kind to SEGMENTS or leave it as null*
  • Increase consistency in usage and documentation of default values
  • Decide on the file structure*
  • Implement Shape, Contour, and primitive shape classes
  • Implement creator functions*
    • Decide if we want wrappers (e.g. createPoint() for () => new Point(/* data */))
    • Implement wrappers or do nothing if we decide to omit them
  • Implement PrimitiveShapeCreators (a JavaScript Map object)*
  • Implement 2D and 3D output visitor classes
  • Implement drawShape() on 2D and 3D renderers and refactor user-facing functions
  • Develop PointAtLengthGetter**
  • Consider new primitives based on existing combinations of vertex and shape kinds**

*: A separate GitHub Issue does not need to be created.
**: A separate GitHub Issue should be created, but not yet.

For all the unmarked tasks, I'll set up separate issues with more details as soon as I get a chance, and the task list will link to them. Any code should be committed to the new vertex-refactor branch @davepagurek set up, and we'll merge it into the main branch once the time is right. For right now, we can get started by discussing the tasks marked with a single asterisk!

@Gaurav-1306
Contributor

I would like to work on this issue.

Just a little hiccup: I can start in the next two weeks, so if someone wants to do it before then, they can; after that, I would like to work on this issue.
@GregStanton thanks again for the update 😃

@GregStanton
Collaborator Author

Hi @Gaurav-1306! It's great that you want to work on this! We may want to break this up into smaller tasks that multiple people can work on. Maybe the first thing is to make a list of people who are interested. Maybe one person can take the lead by dividing up tasks. When I get a chance, I can think more about that, but I'd love to hear from others on this.

@Jaivignesh-afk
Contributor

I would like to work on this as well!

@limzykenneth
Member

Great write-up @GregStanton! I like the idea of having internal classes for each type of vertex; it also makes it possible for add-on libraries to implement their own unique vertex types that work with beginShape() and endShape().

As a start, can we work out a common interface that each of the classes should adhere to? That is, what properties and methods should they all have, and what are the signatures of those methods?

After that or at the same time we can figure out how the internals call the methods/interface. For anyone interested in working on this, please work out a proposal in this issue first. You can test out code implementation in the web editor as @GregStanton has done with the proof of concept. Thanks!

@GregStanton
Collaborator Author

GregStanton commented Nov 21, 2023

Thanks for the kind words @limzykenneth, and for helping to direct the effort! I'm really glad you like the idea.

I'm not sure if you noticed, but the code that makes my proof of concept work is all located in shape.js, inside the project's file directory. I think it may provide initial answers to your questions. For example, the vertex classes all have the following interface:

  • data property
    • description: array holding the vertex and other data (e.g. control points) passed to the associated p5 function
    • usage: in the case of CurveVertex objects, this property holds data from consecutive calls to curveVertex()
  • type property
    • description: type of vertex (possible values are constants.VERTEX, constants.QUADRATIC_VERTEX, etc.)
    • usage: curveVertex() function checks type to see if it needs to push data into an existing CurveVertex object
  • coordinates getter
    • description: returns vertex coordinates from data (e.g. in CurveVertex, this is the second-to-last data point)
    • usage: endShape() uses it to keep track of the current point on the path, for use by ArcVertex
  • addToCanvasPath() method
    • description: takes no parameters; adds the corresponding subpath to the canvas
    • usage: called by endShape()
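As a rough sketch of that interface, here's what a CurveVertex might look like, with a string standing in for constants.CURVE_VERTEX and the canvas conversion stubbed out; details are simplified from the proof of concept.

```javascript
class CurveVertex {
  constructor() {
    this.data = [];                 // [x, y] pairs from consecutive curveVertex() calls
    this.type = 'curveVertex';      // stand-in for constants.CURVE_VERTEX
  }
  // A Catmull-Rom spline passes through its second-to-last point given so far,
  // so that's the vertex coordinate endShape() tracks as the current point.
  get coordinates() { return this.data[this.data.length - 2]; }
  addToCanvasPath(ctx) { /* would emit bezierCurveTo() calls after conversion */ }
}
```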

Possible next steps/questions

  1. Does anyone want to extend my proof of concept to the 3D case? This might mean adding a method or getter analogous to addToCanvasPath(), but it would output vertices; @davepagurek mentioned something along these lines on Discord.
  2. Does anyone want to extend my proof of concept so that it supports contours? I think Dave also suggested a nested array that starts with _path as in shape.js and holds additional arrays for the contours after that.
  3. I'm glad you mentioned extensibility; that's a good point to keep in mind. I guess we'd need to expose something like shape.js's _path array in order to allow add-on libraries to create new vertex types? Maybe there's another way.

Edit: Maybe @Gaurav-1306 or @Jaivignesh-afk can divide up points 1. and 2. above, regarding the 3D case and contours?

@limzykenneth
Member

I'm glad you mentioned extensibility; that's a good point to keep in mind. I guess we'd need to expose something like shape.js's _path array in order to allow add-on libraries to create new vertex types? Maybe there's another way.

We don't necessarily have to figure this out at this point. Once we have the implementation working with the flexibility we want, we can revisit it later. That is to say, it's a nice-to-have but not a must-have.

@davepagurek
Contributor

For 3D support: in WebGL, each path/contour ends up just as an array of vertices, so each segment would need to be able to convert itself into a polyline. This will also depend on the level of detail, so its interface for WebGL would have to look something like:

interface Segment {
  first: boolean
  addToCanvasPath(ctx: CanvasRenderingContext2D): void
  addToPolyline(polyline: Array<p5.Vector>, curveDetail: number): void
}

Here I've also made this a void function that mutates the array to be consistent with the way the 2D one works, but outputting an array of vertices which we then concatenate to a big list would also work.

I've also added a first property here because I assume for 2D mode, each vertex may need to know if it needs to moveTo or if it needs to lineTo based on whether it's the start of a path/contour.
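To make the polyline idea concrete, here's a hypothetical QuadraticSegment implementing addToPolyline() by sampling the standard quadratic Bézier formula; the class name and details are illustrative, not part of the proposal.

```javascript
class QuadraticSegment {
  constructor(p0, c, p1) { this.p0 = p0; this.c = c; this.p1 = p1; }

  // Mutates the polyline, consistent with the 2D interface above.
  // curveDetail controls how many line segments approximate the curve.
  addToPolyline(polyline, curveDetail) {
    for (let i = 1; i <= curveDetail; i++) {
      const t = i / curveDetail;
      const s = 1 - t;
      // B(t) = (1-t)^2 p0 + 2(1-t)t c + t^2 p1
      polyline.push({
        x: s * s * this.p0.x + 2 * s * t * this.c.x + t * t * this.p1.x,
        y: s * s * this.p0.y + 2 * s * t * this.c.y + t * t * this.p1.y,
      });
    }
  }
}
```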

To support contours, if we have:

type CompositePath = Array<Segment>

...then a full shape would be:

type Shape = Array<CompositePath>

where beginShape() starts off the shape as [[]] (just one composite path, initially empty), and each beginContour call pushes a new empty array to the shape to add a new contour. (The main shape and contours can be treated the same.)
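A minimal sketch of that bookkeeping, with illustrative names and module-level state for brevity:

```javascript
let shape;

// One composite path, initially empty; the main path and contours are uniform.
function beginShape() { shape = [[]]; }

// Each beginContour() pushes a new empty composite path onto the shape.
function beginContour() { shape.push([]); }

// Vertex functions always append to the most recently started path.
function addSegment(segment) { shape[shape.length - 1].push(segment); }
```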

@GregStanton
Collaborator Author

Replying to #6560 (comment) by @davepagurek:

Thanks! This is very helpful. There's some tension between the idea of vertex classes and segment classes; I think you were right to switch over to segment classes. The deciding factor for me is that it seems natural to put all the Catmull-Rom vertices in a single object, and it doesn't make sense to think of that as a singular "vertex" object.

Below, I propose a few tweaks that combine (and hopefully refine) the interfaces we each suggested. I'm using a TypeScript-like syntax as an informal shorthand (e.g. get doesn't actually work in TypeScript interfaces, as far as I know).

Proposed interfaces

If these sound good, we can test them by incorporating them into a new version of the proof of concept. (That's good because I don't have the extra time right now to think this through completely! I think it makes sense, though.)

interface PathSegment {
  data: Array<number | string>;
  type: string;
  get endVertex(): p5.Vector;
  addToCanvasPath(ctx: CanvasRenderingContext2D): void;
  addToVertexPath(shape: Shape, curveDetail: number): void;
}

interface Path {
  segments: Array<PathSegment>;
  currentVertex: p5.Vector;
}

interface Shape {
  boundary: Path;
  vertexBoundary: Array<p5.Vector>;
  contours: Array<Path>;
  vertexContours: Array<Array<p5.Vector>>;
  currentPath: Path;
}

Notes

  • PathSegment
    • first: I omitted this because I think it may be unnecessary? When endShape() loops over the segments in a new path, it knows which of the segments is first, so I think it can call moveTo() before it calls addToCanvasPath(). That's what the proof of concept does without contours, but it seems like it should work for those too.
    • data: Includes everything needed to specify the segment including supporting data (control points, extra vertices for Catmull-Rom splines, ellipse parameters for an arc, etc.) and an ending vertex. The starting vertex doesn't need to be stored, since that can be pulled from the currentVertex property of the relevant Path object.
    • type: May equal LINE, QUADRATIC, BEZIER, or CURVE (later, we can add ARC to the list). An example use is that curveVertex(x, y) can check the current segment's type, and if it's a curve, it can push x and y into its data property instead of starting a new segment.
    • endVertex: Can be pulled from the data property. In a Catmull-Rom spline, it's the second-to-last vertex.
    • addToVertexPath(): Adds vertices to shape.vertexBoundary or shape.vertexContours.
  • Path:
    • Why have a property for the current vertex? In general, it should be helpful to keep track of the ending vertex of the last segment that's been added to the canvas or vertex path; ArcSegment will definitely need it for internal calculations.
    • How can the current vertex be updated? The current vertex can be updated by endShape() after it calls addToCanvasPath() or addToVertexPath().
  • Shape
    • Why separate boundary from contours? Since p5 publicly distinguishes between these, separating them in the implementation has certain advantages. For example, the user can create the boundary and contours in any order, and internally, it will still be easy to distinguish the boundary from the contours.
    • How it might work: Internally, we can create one Shape instance. Then beginShape() can reset the shape's boundary and contours array, and it can set its currentPath to the boundary; beginContour() can push an empty path into the contours array and set currentPath to that. Then, the vertex functions don't need to know what kind of path they're working on; they just push to the current path. Once segments have been pushed to a contour or the boundary, either endContour() or endShape() will loop over them, adding them to a canvas or vertex path; these end functions can also set the currentPath to boundary, in case the user creates the boundary after creating one or more contours.
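The "how it might work" note can be sketched as follows; Shape, Path, and the method names are placeholders for whatever we settle on, not actual p5.js internals.

```javascript
class Path {
  constructor() { this.segments = []; this.currentVertex = null; }
}

class Shape {
  beginShape() {
    this.boundary = new Path();
    this.contours = [];
    this.currentPath = this.boundary;   // vertex functions target this path
  }
  beginContour() {
    const contour = new Path();
    this.contours.push(contour);
    this.currentPath = contour;
  }
  // endContour()/endShape() reset to the boundary, so the user can create
  // the boundary and contours in any order.
  endContour() { this.currentPath = this.boundary; }
  addSegment(segment) { this.currentPath.segments.push(segment); }
}
```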

@sudhanshuv1
Contributor

I would also like to work on this issue!

@davepagurek
Contributor

Thanks @GregStanton! I took some time to think through the implementation some more and have some refinements to propose (or at least, spelling out the separation of concerns more explicitly for implementers). Ideally p5.Renderer just acts as an adapter between the p5 public API and Shape. Then, when the shape is complete, p5.RendererGL and p5.Renderer2D only need to extract the data from Shape in the format they need.

For easier testing and to keep the code clean, I'd ideally like to keep Shape/Path/PathSegment as decoupled as possible. There's a bit of mixing right now between what the data of these classes contain, and the algorithm of how they get constructed. Both are important, but I'd ideally like to have some invariants so that there's less confusing state management. How do you feel about these as some things to try to keep true during construction/drawing:

  • Each class doesn't need to reach "up" to its parent to get info; everything is provided during construction. This makes testing easier and leaves less for contributors to learn when fixing a bug at one level.
  • The data structure is the same during construction and during drawing (meaning we'd dynamically derive the part we're working on, e.g. with a getCurrentPath() that returns the last path in an array rather than storing it separately), so that there are fewer states to consider.

With that in mind, here are some more thoughts for public interfaces (omitting internal state for now as an implementation detail).

interface Shape {
  // To be called by `p5.Renderer` from public drawing API methods:
  addVertex(v: VertexInfo)
  addCurveVertex(v: VertexInfo)
  addArcVertex(v: VertexInfo, options: ArcVertexOptions)
  addQuadraticVertex(v1: VertexInfo, v2: VertexInfo)
  addBezierVertex(v1: VertexInfo, v2: VertexInfo, v3: VertexInfo)
  startPath()
  endPath()
  closePath()

  // To be called from `p5.RendererGL` and `p5.Renderer2D` to draw to the screen
  toVertices(curveDetail: number): Array<Array<VertexInfo>> // Array of arrays for each contour
  drawToCanvas(ctx: CanvasRenderingContext2D)
}

interface VertexInfo {
  position: p5.Vector

  // WebGL-specific data per vertex or control point:
  texCoord?: [number, number]
  fill?: [number, number, number, number]
}

Not necessary for a proof of concept, but just as a heads up: WebGL mode has more info than just position data for vertices, so I've made an interface for everything that might be included. I think you can generally substitute VertexInfo everywhere we previously had only p5.Vector, so it's not too hard a switch when implementing things for real.

I've also opted to include only startPath and endPath here, still treating paths/contours uniformly. While it's potentially easier for people to understand contours as separate things from the boundary, there is actually no underlying distinction in the data: if you want to draw a Venn-diagram shape, you need to draw two circles, and it doesn't matter which circle is the "contour" and which is the "boundary"; both paths contribute to both at different times, so there's no real reason for users to add a boundary later. For that reason, I feel it makes more sense in the renderer implementation to match how the rendering works rather than how the public API is set up. (The public API's insistence that there be one boundary if you have contours can get in the way sometimes, especially when using non-p5 data. I'd love the ability to draw shapes with only contours, so that if you want to draw a font glyph from its source path information, you don't have to special-case the first contour in the glyph and can instead treat all contours in the glyph uniformly.)

So then Shape has to make calls to construct the paths, and to "export" them to the canvas or to a vertex array. I imagine that means Shape makes new paths in the start/endPath methods, and otherwise forwards the vertex methods on to the paths themselves.

interface Path {
  addVertex(v: VertexInfo)
  addCurveVertex(v: VertexInfo)
  addArcVertex(v: VertexInfo, options: ArcVertexOptions)
  addQuadraticVertex(v1: VertexInfo, v2: VertexInfo)
  addBezierVertex(v1: VertexInfo, v2: VertexInfo, v3: VertexInfo)
  closePath()

  toVertices(curveDetail: number): Array<VertexInfo> // Just one array now at this level
  drawToCanvas(ctx: CanvasRenderingContext2D)

  firstVertex(): VertexInfo // For segments, if needed
}

The vertex adder methods on Path will do a lot of the heavy lifting of figuring out when to construct new segments and of what type. The idea is that that's Path's responsibility, and then Segment is just responsible for translating that data into actual drawing commands or vertices.
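For example, the curve-vertex case might look like this; it's a sketch with illustrative names, using instanceof as one way to decide whether to extend the last segment or start a new one.

```javascript
class CurveSegment {
  constructor(v) { this.vertices = [v]; }
  extend(v) { this.vertices.push(v); }
}

class Path {
  constructor() { this.segments = []; }

  // Path decides segment construction; segments just translate data later.
  addCurveVertex(v) {
    const last = this.segments[this.segments.length - 1];
    if (last instanceof CurveSegment) {
      last.extend(v);   // consecutive curveVertex() calls share one segment
    } else {
      this.segments.push(new CurveSegment(v));
    }
  }
}
```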

Since I'm just documenting the public API, I've omitted things like the current vertex. The idea is that the path should know its own current vertex (or can be derived by getting the last vertex of its last segment), but nothing externally needs to know that.

Following @GregStanton's idea, if segments don't know their index, paths would create an array containing the first vertex of the first segment; then calling addToVertexPath on each segment adds everything except the segment's first vertex, since that will already be covered by the array. I've added a startVertex getter to facilitate that.

interface PathSegment {
  type: string
  index: number
  startVertex(): VertexInfo
  endVertex(): VertexInfo
  drawToCanvas(ctx: CanvasRenderingContext2D)
  addToVertexPath(shape: Shape, curveDetail: number)
}

@GregStanton
Collaborator Author

GregStanton commented Nov 24, 2023

Replying to #6560 (comment) by @davepagurek:

Thank you for all this work! One of my favorite things about software development is iterating like this. My previous proposal needed to be adjusted to improve encapsulation, and it looks like you've taken the next steps. Instead of state changes coming from anywhere, we now have objects being responsible for modifying themselves. I see how removing internal state from the interfaces (like the data property in my last proposal) expresses this intent more clearly.

Eventually, we may want to specify the interfaces, the internal state for classes that implement them, and as @limzykenneth suggested, the expected call sites for the various methods. That could be a guide for implementers.

But first, I think I'll be able to provide better feedback on the interface details after I have a few questions answered.

Questions

2D vs. 3D

I'm now thinking we really have two shape models:

  • In the 2D case, we have a segment-based model: shape → paths → path segments. The path segments are added directly to the canvas, e.g. with the native quadraticCurveTo() function.
  • In the 3D case, we have a vertex-based model: shape → paths → vertices. The vertices are connected by line segments in the canvas.

If we follow the single-responsibility principle, it seems like these should be implemented in separate classes. People interested in the 2D functionality may think the vertex-related code is relevant to them, when in fact it's not.

Maybe we could specify a common Shape interface that abstracts out anything common to the two cases, and then we could extend it with Shape2D and Shape3D interfaces? I'd need to spend more time to figure out what this would look like, so I figure it's good to float the general idea first.

Did I break it up too much?

Conceptually, our shapes are built from paths, and our paths are built from segments or vertices. It seems nice to organize our code to reflect this. On the other hand, breaking it up means a single function call turns into three. If we want to draw a shape, we now call a drawing method on the shape, and that calls a drawing method on a path, and that calls a drawing method on a segment.

It's not currently clear to me if this is a step backward or a step forward. Maybe we should focus on breaking up the code horizontally (between 2D and 3D cases) rather than vertically (shapes, paths, and path segments)? Is it possible that doing both would mean breaking things up too much? Maybe we could just use a Path type in our specification, rather than a full interface, like you had originally?

Boundary vs. contours

I'd love the ability to draw shapes where you only have contours...

I think you're right about combining "boundary" and "contours." The original idea is to make beginShape()/endShape() as flexible as the native canvas API by supporting composite paths; since p5 already supports moveTo() operations via beginContour()/endContour(), it seems reasonable to take advantage of the full generality there as well.

Do you see any reason why we couldn't treat beginContour()/endContour() as a general way to create the paths that make up a shape, without requiring a separate boundary to be created first? It seems like this could be done without breaking existing sketches.

@davepagurek
Contributor

davepagurek commented Nov 25, 2023

If we follow the single-responsibility principle, it seems like these should be implemented in separate classes. People interested in the 2D functionality may think the vertex-related code is relevant to them, when in fact it's not.

Maybe we could specify a common Shape interface that abstracts out anything common to the two cases, and then we could extend it with Shape2D and Shape3D interfaces? I'd need to spend more time to figure out what this would look like, so I figure it's good to float the general idea first.

I wonder if the ability to get vertices out of paths in general might be something we want once we have a format that makes it easy, to get functionality like getPointAtLength() on SVG path objects and do things like interpolate between paths. That might be overcomplicating things for now. But to keep that option open while also keeping the vertex and canvas implementations separate, maybe, rather than my original suggestion of having methods for both on each PathSegment, we do something like a visitor pattern that can walk a Path and convert its segments into whatever format the visitor is designed for? That would mean path data would be public, more like your original idea, with all the segment-type-to-vertex conversions in one place and all the segment-type-to-canvas conversions in another.
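A minimal sketch of that visitor idea; class and method names are illustrative, and a real version would cover all segment types.

```javascript
// Segments only know how to accept a visitor.
class LineSegment {
  constructor(v) { this.v = v; }
  accept(visitor) { visitor.visitLineSegment(this); }
}
class BezierSegment {
  constructor(c1, c2, v) { Object.assign(this, { c1, c2, v }); }
  accept(visitor) { visitor.visitBezierSegment(this); }
}

// One visitor per output format; all segment-to-canvas conversions live here,
// and a separate polyline visitor would hold the segment-to-vertex conversions.
class CanvasPathVisitor {
  constructor(ctx) { this.ctx = ctx; }
  visitLineSegment(s) { this.ctx.lineTo(s.v.x, s.v.y); }
  visitBezierSegment(s) {
    this.ctx.bezierCurveTo(s.c1.x, s.c1.y, s.c2.x, s.c2.y, s.v.x, s.v.y);
  }
}

// Walking a path just dispatches each segment to the visitor.
function walkPath(segments, visitor) {
  for (const s of segments) s.accept(visitor);
}
```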

It's not currently clear to me if this is a step backward or a step forward. Maybe we should focus on breaking up the code horizontally (between 2D and 3D cases) rather than vertically (shapes, paths, and path segments)? Is it possible that doing both would mean breaking things up too much? Maybe we could just use a Path type in our specification, rather than a full interface, like you had originally?

That also works as long as we can still break out the common functionality between 2D and 3D mode. I think part of the problem right now is that because the data is handled differently across both modes, we end up with different behaviours between both. I think if the functionality to build up the path is common to both, then that fully addresses that concern for me.

Continuing to think about the visitor pattern sort of idea: if we make these as their own separate objects rather than tying it directly to Renderer2D and RendererGL (the renderers can just initialize the one they want to use), then we can still pretty easily add other operations on paths in the future.

Do you see any reason why we couldn't treat beginContour()/endContour() as a general way to create the paths that make up a shape, without requiring a separate boundary to be created first? It seems like this could be done without breaking existing sketches.

I think that should be ok! It's currently only hard because contours are special cased in the implementation, but if we decide to treat them uniformly internally, then I think we can expose that new functionality without breaking any backwards compatibility.

@GregStanton
Collaborator Author

GregStanton commented Nov 28, 2023

Replying to #6560 (comment) by @davepagurek:

Next iteration

I put together a new iteration of the interface to help us move forward. The main changes are a separation of the 2D and 3D cases, the addition of a visitor interface for adding new functionality, and a return to the original idea of public shape data.

To add new vertex types (e.g. smooth quadratic and Bézier vertices), it should only be necessary to (a) create a new user-facing vertex function, (b) create a new segment class in 2D and 3D, and (c) add a new method to any existing visitor classes.

To add a new feature, it should only be necessary to add a new visitor class and decide how the client code will make use of it; a great trial feature might be something akin to SVG's getPointAtLength().

Note: This comment contains significant edits. I'm hoping this latest iteration is better, but it may still need work. If anyone wants to help improve it or else submit an alternative proposal, that would be great!

//--------BASE INTERFACES--------
type Shape = Array<Contour>

interface Contour {
  firstVertex: Vertex
  segments: Array<ContourSegment>
}

interface ContourSegment {
  //has internal access to full contour
  index: number
  getStartVertex(): Vertex
  getEndVertex(): Vertex
  //Catmull-Rom can have an additional extend(vertex: Vertex) method
  accept(visitor: Visitor)
}

type Vertex = Vertex2D | Vertex3D

//--------2D--------
interface ContourSegment2D extends ContourSegment {
  drawToCanvas(context: CanvasRenderingContext2D)
}

type Vertex2D = p5.Vector //alias for consistent naming

//--------3D--------
interface ContourSegment3D extends ContourSegment {
  addToVertexContour(curveDetail: number)
}

interface Vertex3D {
  position: p5.Vector

  // WebGL-specific data per vertex or control point:
  textureCoordinates?: [number, number]
  fill?: [number, number, number, number]
}

//--------VISITOR--------
interface Visitor {
  //2D
  visitLineSegment2D(line: LineSegment2D)
  visitQuadraticSegment2D(quadratic: QuadraticSegment2D)
  visitBezierSegment2D(bezier: BezierSegment2D)
  visitCurveSegment2D(curve: CurveSegment2D)
  visitArcSegment2D(arc: ArcSegment2D)

  //3D
  visitLineSegment3D(line: LineSegment3D)
  visitQuadraticSegment3D(quadratic: QuadraticSegment3D)
  visitBezierSegment3D(bezier: BezierSegment3D)
  visitCurveSegment3D(curve: CurveSegment3D)
  visitArcSegment3D(arc: ArcSegment3D)
}

Notes

  1. I switched the naming from "path" to "contour" (a) to avoid confusion with the native Path2D, and (b) to be consistent with the p5 naming, since we're now thinking of p5 contours as the general paths that make up a shape.
  2. I'm not sure about all the trade-offs involved with making shape data public. This does reduce the need to forward method calls and also makes it easier to do things like extending curve segments.
  3. Each contour segment typically starts at the endpoint of the previous segment, but as @davepagurek noted, the very first segment has no predecessor. To address this, the first vertex of each contour can be stored separately in firstVertex. We could let vertex() set firstVertex if the current contour is empty and push a line segment otherwise.
  4. Regarding the build process, I think Renderer2D and RendererGL will each have their own versions of the user-facing vertex functions; they'll push 2D and 3D segments into the shape, respectively. Then, the user-facing functions can just call their counterparts on the current renderer.
  5. At some point, I guess we'll want to consider the kind parameter of beginShape(), or other existing features. Once we settle on the interface for the current features, maybe that can be a next step.
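Note 3 above might look roughly like this; names are illustrative, and vertex() is simplified to a free function taking the contour.

```javascript
class Contour {
  constructor() { this.firstVertex = null; this.segments = []; }
}

// The first vertex of a contour has no predecessor, so it's stored
// separately; later vertices become line segments.
function addVertex(contour, v) {
  if (contour.firstVertex === null && contour.segments.length === 0) {
    contour.firstVertex = v;
  } else {
    contour.segments.push({ type: 'line', endVertex: v });
  }
}
```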

Reference

For anyone following along, this seems to be a good introduction to the visitor pattern: https://refactoring.guru/design-patterns/visitor

Update

To be clear, I'm proposing that we make no distinction between vertex functions called directly inside beginShape()/endShape() and those called between beginContour()/endContour(). So, beginShape() starts off the shape with [/* empty contour */], where an empty contour is just a contour whose firstVertex is null and whose segments is []. Similarly, beginContour() either does nothing (if the current contour is empty) or it pushes in a new empty contour.
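A sketch of that convention, with plain objects standing in for the contour class:

```javascript
function emptyContour() { return { firstVertex: null, segments: [] }; }
function isEmpty(c) { return c.firstVertex === null && c.segments.length === 0; }

// beginShape() starts off the shape with one empty contour.
function beginShape() { return [emptyContour()]; }

// beginContour() does nothing if the current contour is already empty;
// otherwise it pushes a new empty contour.
function beginContour(shape) {
  if (!isEmpty(shape[shape.length - 1])) shape.push(emptyContour());
  return shape;
}
```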

But... I didn't realize contours in p5 are closed by default. If we require CLOSE to be passed to endContour(), that would bring existing contours in line with the more general definition we're adopting here, and it would also make endContour() and endShape() work similarly. Unfortunately, that would be a breaking change. Oof. Would we be able to merge it?

Another update

I removed the type field from the ContourSegment interface. It was originally used to check if the current contour segment is a curve (a Catmull-Rom spline), so that curveVertex() would know whether to extend it or start a new segment. However, this was a kludge. It's probably better to use instanceof.

@davepagurek
Contributor

davepagurek commented Nov 30, 2023

This is looking really promising! Replies to your notes:

4. Regarding the build process, I think Renderer2D and RendererGL will each have their own versions of the user-facing vertex functions; they'll push 2D and 3D segments into the shape, respectively. Then, the user-facing functions can just call their counterparts on the current renderer.

I was thinking about this some more, and it seems like there will always be some step where we have to manually parse the arguments into either 2D or 3D data, because even in WebGL mode, you can still call quadraticVertex(x1, y1, x2, y2) with just the 2D arguments. If we have to do that anyway, how do you feel about each segment being implemented by just one class instead of having 2D/3D variants, with the 3D data just being left undefined in the 2D case? This would save 2D/GL renderers from both implementing the logic to add to the current contour: I'm thinking, if each line segment is always a LineSegment class, then something (the p5.Renderer base class, or maybe a Shape class) can be responsible for taking the user's functions and constructing contour data out of it. Then maybe an abstract method on p5.Renderer called drawShape(completedShape) can be implemented by p5.Renderer2D/GL separately to apply whichever visitor it needs, and that's the only part that's different between the two?

5. At some point, I guess we'll want to consider the kind parameter of beginShape(), or other existing features. Once we settle on the interface for the current features, maybe that can be a next step.

This is a really good point! Currently, primitives like QUADS work kind of like contours: both add a moveTo() internally, either between quads, or between contours themselves. So maybe we need a translation layer between user functions and the segments we create, which in the QUADS case, waits for four vertices to come in, and then emits a new contour into the shape?

If so, then maybe we need something like a command queue, and a strategy for interpreting the queue. Maybe the default strategy always creates new segments, but we swap out the strategy when you change shape mode, so the QUADS strategy does nothing until there are four vertex commands in the queue, and then converts all of them into a new contour. Something like QUAD_STRIP also has a bit of a memory of the previous quad since successive quads share an edge, maybe that still could work if we let the strategy have state? Something like this:

class QuadStripMode {
  constructor() {
    // The edge shared with the next quad, carried over between calls
    this.lastEdge = null;
  }

  handleCommands(queue, shape) {
    if (this.lastEdge) {
      // Subsequent quads only need two new vertices
      if (queue.length !== 2) return;
      shape.contours.push([
        this.lastEdge[0],
        this.lastEdge[1],
        queue[1],
        queue[0],
      ]);
      this.lastEdge = [queue.shift(), queue.shift()];
    } else {
      // The first quad needs a full four vertices;
      // reorder the zigzag strip order into a closed loop
      if (queue.length !== 4) return;
      shape.contours.push([queue[0], queue[1], queue[3], queue[2]]);
      queue.shift();
      queue.shift();
      this.lastEdge = [queue.shift(), queue.shift()];
    }
  }
}

Also, curves probably don't make sense when combined with a mode like QUADS, right? We could deal with that by having the strategy for that shape mode throw an error if it gets anything but a vertex command.

One more thought: right now the shape mode is only specified on the whole shape. In theory, this could be different for just one part of a shape (one begin/endContour(), from the user's perspective.) Maybe in the future we could let you add a shape kind as a parameter to beginContour() to temporarily override the whole shape one?

@GregStanton
Collaborator Author

GregStanton commented Dec 4, 2023

Replying to #6560 (comment) by @davepagurek:

Thanks! You make a good point about creating a single construction process, and the queueing idea opens up a new possibility. Overall, it sounds like our next job is to figure out how to handle two cases: dimensions (2D or 3D) and kinds (POINTS, LINES, TRIANGLES, QUADS, etc.). I think we may be able to address both at once with a variation of the abstract factory pattern. I'll try to make my thinking clear enough for anyone following along.

Motivation

Let's break it down and consider the dimension problem first. Constructing a shape in 2D and 3D involves nearly the same process, and we'd like to avoid duplicate code. Since the only apparent difference is that different objects are created, we might try a creational design pattern: rather than merging the objects themselves—which leads to inapplicable object properties and extra complexity for 2D developers—we can merge the interfaces through which they're created. As it turns out, this also provides us with a way to handle the kind problem.

Handling dimension

We start by defining 2D and 3D factories. In our case, these are classes whose methods create objects, so the methods have names like createQuadraticSegment(). When beginShape() initializes the shape, it also sets the factory based on whether we're using a 2D or 3D renderer. Then vertex functions like quadraticVertex() call methods like factory.createQuadraticSegment(); segments created in this way are then added to the shape (I'll describe that in more detail soon). Note that the same interface factory.createQuadraticSegment() is used to create objects in 2D and in 3D, as we wanted.

Handling kind

Now let's consider if this will still work when we add POINTS, LINES, TRIANGLES, TRIANGLE_STRIP, etc. To build on the approach we have so far, we can continue thinking of these as different kinds (classes) rather than different modes (algorithms).
Then, just as we'll have classes such as QuadraticSegment2D, we can have classes like TriangleInStrip2D. The question is, how can we define the vertex functions so that they can create these new kinds of shape primitives, without giving each vertex function too many responsibilities?

First let’s consider how many different kinds of shape primitives each vertex function may need to create. The existing non-default kinds only work with vertex(), so right now, that’s the only vertex function that needs to deal with more than one kind. However, it's possible to imagine other combinations of shape kinds and vertex kinds that could be useful in the future.1 So, it'd be nice if our design could accommodate arbitrary combinations without requiring us to change the code inside existing vertex functions.

Now let’s consider how each vertex function might manage to create an unforeseen variety of shape primitives. That will work if the vertex functions can retrieve the right creator method based on the vertex kind and shape kind. We could try to extend our factory classes, but a possibly simpler option might be to replace the factories with Map objects. This way, a key of the form [vertexKind, shapeKind] could be directly paired with a particular creator function; the user-facing vertex functions could then use the map to create the right kind of object, without any hard-coded shapes.
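
As a hypothetical sketch of this lookup (note that a JavaScript Map compares array keys by reference, so a joined string key is one simple workaround; all names here are invented):

```javascript
// Map from [vertexKind, shapeKind] (encoded as a string key) to a
// creator function; the vertex functions never hard-code shape types.
const primitiveShapeCreators = new Map([
  ['vertex:PATH', (v) => ({ type: 'LineSegment', vertices: [v] })],
  ['vertex:TRIANGLES', (v) => ({ type: 'Triangle', vertices: [v] })],
  ['bezierVertex:PATH', (v) => ({ type: 'BezierSegment', vertices: [v] })],
]);

function createPrimitive(vertexKind, shapeKind, vertex) {
  const creator = primitiveShapeCreators.get(`${vertexKind}:${shapeKind}`);
  if (!creator) {
    throw new Error(`No primitive for ${vertexKind} in a ${shapeKind} shape`);
  }
  return creator(vertex);
}
```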

Organizing construction and rendering

Once the vertex functions create the primitives, they need to add them to the shape, and once the shape is complete, it needs to be rendered. For these purposes, we could give each primitive object addToShape() and render() methods. The vertex functions would call addToShape() on the appropriate shape primitive after creating it. After that, if each primitive shape object stores a reference to the shape and rendering context to which it belongs, the user-facing endShape() can call the render() methods directly. The render methods should be able to handle any operations that are specific to the type of primitive (e.g. a moveTo() for Quad2D); this should be true even if the operations rely on state (the shape will be fully constructed by the time an operation like moveTo() is called, and the render methods will have access to the shape). There are some aspects I’m not sure about (see Question 1 below), but I wonder if this could be a really nice way to organize things.

This way, if a user is reading the p5 website’s reference pages and clicks to see the source code, it should be much easier for them to navigate it and to understand the parts they’re interested in. When they look up the source for the user-facing shape and vertex functions, they'll see that these direct the construction and rendering of 2D and 3D shapes from one place, without if…else statements and without forwarding requests to different renderers. Beyond that, if they need to understand the details for a particular shape primitive, they'll find all the code encapsulated within that shape’s own class.

Implementing construction and rendering

I'm assuming for now that we're making the shape accessible to the primitive shape classes. In that case, as soon as the user input is encountered, we can push it through the appropriate addToShape() method and into the shape. In some cases, it will be necessary to use the shape state to determine what to do with the input, but that’s the job of addToShape(). (See Question 2 for a possible alternative based on a command queue.)

For example, a new CurveSegment2D object will have just one vertex in it. Its addToShape() method can check to see if the current contour segment is a curve segment, and if so, it can add the new vertex to that segment.2 If not, it can push the newly created (and still incomplete) curve segment directly into the shape. In a similar way, the render() method will know whether it’s attached to a shape created in 2D or 3D. So, it will know whether it needs to give commands to the 2D rendering context or whether it should produce vertices for the 3D case.
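
A rough sketch of how addToShape() might work for curve segments, under the assumption that the current contour's segments are accessible; the names are illustrative:

```javascript
// A curve segment either extends the current segment (if it's also a
// curve) or pushes itself into the shape as a new, incomplete segment.
class CurveSegment {
  constructor(vertex) {
    this.vertices = [vertex];
  }

  addToShape(shape) {
    const contour = shape.contours[shape.contours.length - 1];
    const last = contour.segments[contour.segments.length - 1];
    if (last instanceof CurveSegment) {
      last.vertices.push(...this.vertices); // extend the existing spline
    } else {
      contour.segments.push(this); // begin a new (still incomplete) spline
    }
  }
}
```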

Summary

  1. Separate 2D and 3D vertex objects, in keeping with the single responsibility principle.
  2. Support different kinds of shapes by extending the list of primitive shape classes.
  3. Combine construction and rendering for 2D, 3D, and all kinds in the following way:
    a. Use factories or maps to create the right type of shape primitive at runtime.
    b. Use classes to encapsulate primitive shape data with the construction and rendering details specific to that data.

Questions

  1. I still haven’t learned enough about 3D rendering in p5 with WebGL, WebGL2, and potentially WebGPU. So I’m not exactly sure if a render() method on a common interface for 2D and 3D makes sense. After the necessary vertices are generated for the 3D case, would render() actually be able to render them, or would it need to pass the vertices somewhere else?
  2. Based on the general direction of @davepagurek’s last post, we might replace addToShape() with addToQueue(). Would this be better? This should eliminate the need to keep checking the shape state each time a function like curveVertex() is executed. Since a curve segment can be built from any number of vertices greater than three, we may need to store all the commands for the whole shape somewhere (following the command pattern). Then endShape() could invoke functions that construct and render the shape. This might be a premature optimization. I’d need to think more to understand if this would even work, and what the precise trade-offs might be, but I thought it’d be better to post what I have so far.
  3. Right now, contours are closed by default. This may cause problems with our new approach. If we require CLOSE to be passed to endContour(), that would bring existing contours in line with the more general definition we're adopting, and it would make endContour() and endShape() work similarly. Unfortunately, that would be a breaking change. Would we be able to merge it? I asked this question in an update to an earlier comment, but I’m not sure if everyone noticed the update.
  4. What do you all think about this overall approach? Does it have any significant flaws that I’m overlooking?

Future considerations

Maybe in the future we could let you add a shape kind as a parameter to beginContour() to temporarily override the whole shape one?

Good point! Do you have any use cases in mind? If greater generality can be achieved without increasing complexity of one form or another, then we almost don’t even need to come up with use cases. But if it does, it’d be nice to know some concrete benefits to point to. I have some initial ideas but don’t want this comment to get too long, so I’ll defer this for now. We may also want to consider if certain combinations wouldn’t make sense, and if so, how to handle that.

Once we settle on an overall approach, I guess the next step is to make the necessary adjustments to the previous iterations of the interface that we discussed.

Update: Changed TriangleStrip2D to TriangleInStrip2D and revised some wording in the section "Organizing construction and rendering," for clarity.

Footnotes

  1. Bézier triangles and Bézier quadrilaterals may top the list. Catmull-Rom splines might be extended similarly. If we allow for new types of shapes or vertices, NURBS curves and surfaces might be worthy of consideration. Or, if we're feeling really creative, maybe even spherical triangles or elliptical triangles with arcVertex()? Hmmm... not sure about that one.

  2. Since the new curve segment object is not added to the shape in this case, we could avoid creating a whole object around the vertex, and just push it directly into the existing segment. That's how the original proof of concept works, but that was only possible because the construction logic was included directly inside of curveVertex(), not in a separate class method like addToShape(). However, any memory allocated for the temporary object should be reclaimed after curveVertex() executes, so this doesn't seem like a problem. Also, putting the logic inside an addToShape() method has a major benefit: vertex() isn’t responsible for constructing a long list of primitives (POINTS, LINES, TRIANGLES, etc.). In particular, vertex() won’t need to be changed if a new shape primitive is added.

@davepagurek
Contributor

2D and 3D segment interfaces

I’m not exactly sure if a render() method on a common interface for 2D and 3D makes sense. After the necessary vertices are generated for the 3D case, would render() actually be able to render them, or would it need to pass the vertices somewhere else?

Right, I'm not confident that we'd be able to have a common render() interface (especially because "rendering" in 3D might mean drawing to the screen, or adding to a p5.Geometry if done within a builder class.) Because WebGL has to implement clipping itself, after all the shape outlines have been created, we have to postprocess them to convert them to triangles that can then be rendered. I think changing that process should be out of scope for this in order to limit the size of the change.

Thinking through the implications of this, I feel like it's conceptually simpler to be able to have methods that convert to different output formats: toVertices(): Array<Vertex> and toContext2D(ctx: CanvasRenderingContext2D): void.

If we had a common render() interface, 2D and 3D things could have separate implementations of the same method. The trouble with having separate methods is that we'd either need all 2D and 3D segments to implement all methods, which would result in duplication between 2D and 3D implementations, or we have to introduce a lot of abstraction all the way up our chain of classes to be able to handle the fact that in some cases we have one method available and in other cases we have another. While separation of concerns is a good goal, in this case I feel like it maybe wouldn't be worth the extra complexity?

I personally am not opposed to having just one dimensional type per segment type, which implements both output formats in one spot. That would look something like:

// Probably will extend a base class that handles some construction stuff later
class LineSegment {
  shape: Shape;
  index: number;
  vertex: Vertex;

  constructor(shape: Shape, index: number, vertex: Vertex) {
    this.shape = shape;
    this.index = index;
    this.vertex = vertex;
  }

  toVertices() {
    return [this.vertex];
  }

  toContext2D(ctx: CanvasRenderingContext2D) {
    if (this.index === 0) {
      ctx.moveTo(this.vertex.position.x, this.vertex.position.y);
    } else {
      ctx.lineTo(this.vertex.position.x, this.vertex.position.y);
    }
  }
}

Downsides:

  • We can't import just the 2D functionality if you never use anything else because it's baked into the class (although I suppose we could still use a visitor to do toVertices and toContext2D if we want that?)
  • There isn't an explicit separation of 2D and 3D data types

The benefits of this are:

  • less structural complexity, which itself could go a long way at making it easier for newcomers to parse
  • easier access to vertex data in 2D too, for future extensions for something like getPointAtLength and ways to resolve some existing issues with textToPoints
  • makes it harder to ignore 3D. While this could be seen as a negative, it could also be seen as a positive: since the 3D case is right there next to the 2D canvas implementation, it's probably less likely for inconsistencies to get introduced.

Handling kind

I think a map is a good idea. Maybe rather than putting both the vertex and kind in the key, which means having to enumerate a lot of possibilities, we can just have two maps? The shape kind's handler could then look up the vertex type in the map if it wants the default behaviour, or override it with custom behaviour (e.g. a POINTS kind could add a special PointSegment given an incoming vertex(), and throw an error for anything else.)
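
One possible reading of the two-map idea, with invented names: a default map from vertex function to creator, which a shape kind can replace wholesale (so vertex functions the kind doesn't list become errors):

```javascript
// Default creators used when the shape kind has no special behaviour
const defaultCreators = new Map([
  ['vertex', (v) => ({ type: 'LineSegment', vertex: v })],
  ['bezierVertex', (v) => ({ type: 'BezierSegment', vertex: v })],
]);

// Per-kind overrides; a kind with its own map only allows what it lists
const kindCreators = new Map([
  ['POINTS', new Map([
    ['vertex', (v) => ({ type: 'PointSegment', vertex: v })],
  ])],
]);

function creatorFor(shapeKind, vertexKind) {
  const overrides = kindCreators.get(shapeKind);
  const creator = overrides
    ? overrides.get(vertexKind)
    : defaultCreators.get(vertexKind);
  if (!creator) {
    throw new Error(`${vertexKind}() is not supported in ${shapeKind} mode`);
  }
  return creator;
}
```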

addToShape vs addToQueue

I think defaulting to just using the shape can make sense, and we can keep the queue idea in our back pocket if we find the implementations are getting pretty complex.

Closing contours

I think as long as our internal contour structure doesn't care if it's closed or not, then we can continue making the user-facing endContour() auto-close itself. That way the main path can continue to be implemented internally as a contour, and we don't change the behaviour of existing sketches that use begin/endContour. In the future, we could add an endContour(OPEN) flag if we want, so we haven't permanently closed the door on non-closed contours.

Points

POINTS feels like an outlier currently: in 3D mode they are not processed at all; each is drawn as a particle without any geometry processing. In 2D mode, we implement these as circle paths (so they might actually end up being able to clip things? But this seems like an unintended side effect if they do, and probably not a behaviour we need to preserve.)

Maybe we should implement these as contours with just one PointSegment. Then in our toContext2D() implementation, that can draw a small circle, and in toVertices we can choose to either ignore them (and add a toPoints method to extract just the points), or output a length-1 contour that we can postprocess however we like to treat them separately.

@GregStanton
Collaborator Author

GregStanton commented Dec 4, 2023

Hi all!

We're going to have an online video call to hash out an approach that will allow us to accommodate composite paths, contours, the 2D and 3D cases, shape kinds, and visitors; @davepagurek and I will be there, and anyone who is interested is welcome to join!

Date: Thursday, December 7, 2023
Time: 5:00 PM Eastern Time (UTC-5)
Place: BitPaper

The BitPaper platform we're using includes a shared online whiteboard, and it runs in the browser, so no installation is required. You can use the link to visit the meeting room and take a look at the agenda, but the pages are locked until the meeting goes live, so no edits will be possible until then. In case anyone is interested and can't make it, we'll post meeting notes here.

If you're interested, we'd love it if you could comment here, so that we know how many people to expect. But, if you can't let us know in advance, you're still welcome to attend.

Thanks!

Update: When the meeting opens, you may need to sign in to BitPaper. You should be able to join with a Google or Meta account, or with an email and a password. It's free for you to use.

@xanderjl

xanderjl commented Dec 4, 2023

I'd love to be a fly on the wall if you'd have me @GregStanton 👀

@GregStanton
Collaborator Author

Of course @xanderjl! Looking forward to it. I just updated the meeting announcement with the actual link to the meeting room.

@Gaurav-1306
Contributor

I would also like to join the meeting.

@GregStanton
Collaborator Author

GregStanton commented Dec 7, 2023

Hi all! I put together a new approach based on all the discussion so far, and I updated it with feedback based on the meeting.

Features

This plan supports the following features:

  • composite paths
  • contours
  • 2D and 3D
  • shape kinds (quads, etc.)
  • per-vertex texture coordinates, colors, and normal vectors (WEBGL mode)
  • visitors for shape handling

Goals

We can check any design changes against the following goals:

  • Increase reusability (e.g. by separating construction and rendering concerns)
  • Facilitate navigation (e.g. by making it easy to find the code that draws Béziers)
  • Improve extensibility (e.g. by making it easier to add arcVertex())

Design overview

After beginShape() initializes a shape object, each type of vertex function calls its counterpart on that object. For example, vertex() calls shape.vertex().

Each vertex method attached to shape will create a vertex object and build the appropriate primitive shape around it, depending on the shape kind; the appropriate primitive will be selected using an internal Map object called primitiveShapeCreators. Once a vertex function creates the appropriate primitive, it has the primitive add itself to the shape:

let vertex = createVertex(/* vertex data */);
let primitiveShapeCreator = primitiveShapeCreators.get([vertexKind, shapeKind]);
let primitiveShape = primitiveShapeCreator(vertex);
primitiveShape.addToShape();

Then endShape() closes the shape if CLOSE is passed, and it calls drawShape() on the current renderer (a 2D or 3D renderer). The drawShape() method loops over the shape’s primitives, letting a visitor produce the output for each one, and then it renders this output. Specifically, here's how drawShape() works in each case:

  • 2D: accepts a PrimitiveToPath2DConverter on each primitive before stroking and filling the output
  • 3D: accepts a PrimitiveToVerticesConverter on each primitive before rendering the output in the usual way

Shape construction: Shape -> Contours -> Primitives -> Vertices

A shape consists of contours (subshapes), which themselves consist of shape primitives. While the shape primitives are the most basic objects that can be rendered, they may be composed of one or more vertices. The outline below indicates how the design will look with the planned API for custom shapes in p5.js 2.0.

interface Shape {
  kind: string // constants.LINES, etc.
  contours: Array<Contour>

  // shape and contour methods
  beginShape(kind: string)
  endShape(mode: string, count: number)
  beginContour(kind: string)
  endContour(mode: string)

  // vertex constructors (these may support an x, y, u?, v? signature too)
  vertex(x: number, y: number, z?: number, u?: number, v?: number)
  bezierVertex(x: number, y: number, z?: number, u?: number, v?: number)
  splineVertex(x: number, y: number, z?: number, u?: number, v?: number)
  arcVertex(x: number, y: number, z?: number, u?: number, v?: number) // new in p5.js 2.0

  // vertex setters
  fill(color: p5.Color)
  stroke(color: p5.Color)
  normal(x: number, y: number, z?: number)
}

interface Contour {
  kind: string // set to shape kind unless a kind is provided to `beginContour()`
  primitives: Array<ShapePrimitive>
}

interface ShapePrimitive {
  accept(visitor: Visitor)
  addToShape(shape: Shape) // processes user input and adds it to shape
}

interface Vertex {
  position: p5.Vector // set via a vertex function
  textureCoordinates: p5.Vector // set via a vertex function
  normal: p5.Vector // set via normal()
  fill: p5.Color // set via fill()
  stroke: p5.Color // set via stroke()
  toArray(): Array<number> // returns vertex data in a 1D array
  // add lerp() or other interpolation methods, like p5.Vector has?
}

Notes:

  1. Vertex instances contain the same data, in the same format, regardless of the renderer. Individual renderers (including the built-in 2D and WebGL renderers) will decide which of the shape's data to use. This is analogous to an artist who decides to draw a colorful object in black and white.
  2. The Shape class is a slim version of Processing's PShape class. It'd work with only the kind and contours properties, but the extra methods make it possible to completely separate shape construction from shape rendering. The result is a more extensible design. (For example, it'd be easier to add SVG-export functionality. Without rendering a path, it could be stored in a Shape instance and converted to an SVG path.)

Path primitives

Paths are created when the contour kind is PATH, which is the default. Each path consists of a single anchor at the start, followed by one or more segments.

Anchors

An Anchor is the starting point for the segments in a path. It's created when a contour of the PATH kind is empty (the contour kind field will be implemented with a getter that will return EMPTY_PATH in this case).

  • Interface: ShapePrimitive
  • Class: Anchor

In the typical case, a path segment will begin at the end of the preceding segment. However, the very first segment lacks a predecessor; the anchor fills this role. (For example, PrimitiveToPath2DConverter would convert the anchor to a moveTo() command to ensure that the path starts from the right place.)

Segments

Segments are equivalent to subpaths in the native canvas API and SVG.

  • Interface: Segment
  • Classes: LineSegment, BezierSegment, etc.
interface Segment extends ShapePrimitive {
  index: number // used to get start vertex from preceding segment
  getStartVertex(): Vertex
  getEndVertex(): Vertex
}
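
A minimal sketch of how getStartVertex() could use index, assuming for brevity that the contour's anchor is stored as a plain vertex; all names are placeholders:

```javascript
// A segment's start vertex is the previous segment's end vertex,
// or the contour's anchor for the very first segment (index 0).
class LineSegment {
  constructor(contour, index, endVertex) {
    this.contour = contour;
    this.index = index;
    this.endVertex = endVertex;
  }

  getStartVertex() {
    if (this.index === 0) return this.contour.anchor;
    return this.contour.primitives[this.index - 1].getEndVertex();
  }

  getEndVertex() {
    return this.endVertex;
  }
}
```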

Isolated primitives

Isolated primitives aren't connected to their neighbors. They're created when the contour kind is POINTS, LINES, TRIANGLES, or QUADS.

  • Interface: ShapePrimitive
  • Classes: Point, Line, Triangle, Quad

Since isolated primitives aren't connected to their neighbors, these primitive classes are self contained. For example, Line directly contains two vertices, instead of pulling its start vertex from its predecessor. This distinguishes Line from LineSegment.

Tessellation primitives

Tessellations consist of shapes that cover a region in 2D or a surface in 3D, with no overlaps and no gaps. They're created when the contour kind is TRIANGLE_FAN, TRIANGLE_STRIP, or QUAD_STRIP.

  • Interface: ShapePrimitive
  • Classes: TriangleFan, TriangleStrip, and QuadStrip

Note that the classes have names like TriangleStrip instead of TriangleInStrip; in other words, the primitive is the entire strip, rather than a single triangle within a strip.

Future primitives

There are other combinations of vertex kinds and shape kinds that lead to meaningful primitives. For example, in a contour with a kind set to TRIANGLES, the bezierVertex() function could produce a Bézier triangle, represented by a BezierTriangle primitive. These could potentially be implemented in the future.

Shape handling: Visitor classes

A primitive visitor can visit different primitives and use their data, for example to produce native Canvas API drawing commands. Putting such functionality inside a separate visitor class means that a 3D primitive doesn't require a method for the 2D Canvas API, for example. Visitor classes are also independent of the renderer, which makes it easier to add new features that could potentially work with multiple renderers. A few examples of classes that might implement the PrimitiveVisitor interface are below.

Class: PrimitiveToPath2DConverter

  • Description: Used to add a 2D shape primitive to the current canvas path.
  • Note: For isolated primitives, this includes a moveTo() command.

Class: PrimitiveToVerticesConverter

  • Description: Used for converting shape primitives to an array of 2D or 3D vertices.
  • Note: Could be used for multiple purposes, including generating vertices for 3D rendering.

Class: PointAtLengthGetter

  • Description: Gets a point along a segment at a given length.
  • Note: Constructor takes the given length as input?
interface PrimitiveVisitor {
  // segment primitives
  visitLineSegment(lineSegment: LineSegment)
  visitQuadraticSegment(quadraticSegment: QuadraticSegment)
  visitBezierSegment(bezierSegment: BezierSegment)
  visitCurveSegment(curveSegment: CurveSegment)

  // isolated primitives
  visitPoint(point: Point)
  visitLine(line: Line)
  visitTriangle(triangle: Triangle)
  visitQuad(quad: Quad)

  // tessellation primitives
  visitTriangleFan(triangleFan: TriangleFan)
  visitTriangleStrip(triangleStrip: TriangleStrip)
  visitQuadStrip(quadStrip: QuadStrip)
}

Note: If a particular visitor class doesn't apply to a certain kind of primitive, it could either omit the method corresponding to that primitive, or it could throw an error.
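
To make the pattern concrete, here's a minimal visitor sketch that records path commands in an array instead of talking to a real canvas context; the class names mirror the interface above but are otherwise illustrative:

```javascript
// A visitor that collects path commands; a real PrimitiveToPath2DConverter
// would call moveTo()/lineTo() on a Path2D or rendering context instead.
class PathCommandCollector {
  constructor() {
    this.commands = [];
  }

  visitAnchor(anchor) {
    this.commands.push(['moveTo', anchor.x, anchor.y]);
  }

  visitLineSegment(seg) {
    this.commands.push(['lineTo', seg.x, seg.y]);
  }
}

// Each primitive only knows how to accept a visitor
class Anchor {
  constructor(x, y) { this.x = x; this.y = y; }
  accept(visitor) { visitor.visitAnchor(this); }
}

class LineSegment {
  constructor(x, y) { this.x = x; this.y = y; }
  accept(visitor) { visitor.visitLineSegment(this); }
}

const collector = new PathCommandCollector();
for (const primitive of [new Anchor(0, 0), new LineSegment(10, 10)]) {
  primitive.accept(collector);
}
// collector.commands is now [['moveTo', 0, 0], ['lineTo', 10, 10]]
```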

Updates:

  1. Made refinements summarizing changes discussed on Discord and elsewhere. These include new Vertex and Anchor classes.
  2. Removed the ambientDimension field from the Shape interface (the new Vertex class will store the same data, in the same format, regardless of the ambient dimension; renderers will decide which data to use by calling a suitable visitor).
  3. Added shape construction methods to Shape class and adjusted the rest of the design accordingly. (Originally, the Shape interface only had kind and contours properties, and no methods other than the constructor.)

@davepagurek
Contributor

That update looks great @GregStanton, it looks like it will handle all the situations we've thought of so far!

On the call today we can go over all of this again and address any questions or requests for clarification other contributors might have.

@GregStanton
Collaborator Author

Updates from the meeting on December 7, 2023:

Thanks so much to everyone for attending! During the meeting, we adjusted the latest approach and cleared up our main questions. I went ahead and incorporated the adjustments into my previous comment. It looks like we're ready to go ahead with the approach described there. I'll summarize the key points below. I'm filling in some details myself, so if anyone notices any errors, please let me know!

Summary of questions

  • Question 1:

    • How will we handle shapes that require multiple calls to a vertex function before they're complete?
    • Once a vertex function creates the right kind of primitive, it will ask the primitive to add itself to the shape by calling its addToShape() method. This method will check the shape to see if the current primitive is of the same type; depending on what it finds, it will either extend the current primitive (by modifying its data) or push in the new (incomplete) primitive object. By the time endShape() is called and rendering occurs, the primitives will all be complete.
  • Question 2:

    • The current interface only addresses the pre-rendering steps: PrimitiveToPath2DConverter may just add paths to the current path without stroking or filling them, and PrimitiveToVerticesConverter only outputs an array of vertices without rendering them. How will the rendering happen?
    • The user-facing endShape() method can close the path and then call drawShape(shape) on the current renderer. The drawShape() methods will loop over the shape's primitives, produce output for rendering, and then do the actual rendering. In the 2D case, PrimitiveToPath2DConverter outputs path commands, and the path is then stroked and filled. In the 3D case, PrimitiveToVerticesConverter outputs vertices that include all necessary rendering data, and the rendering then happens according to the usual process.
  • Question 3:

    • How will we want to handle normal() and vertexNormal()?
    • These can push their data into the corresponding vertex array, since we're representing vertices as arrays of numbers.
  • Question 4:

    • How likely is it that after this refactoring, we'll be able to add features that will support use cases such as reproducing and animating SVG paths (e.g. to animate the drawing of individual glyphs in typeset mathematics) or to morph one shape into another? (For example, Manim is a Python library that supports both of these features.)
    • This seems doable! The ability to reproduce SVG paths, together with a feature like getPointAtLength(), should make these things possible. Some JavaScript code is already sitting around that we could use once we have the foundation in place.
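The extend-or-append behavior described in Question 1 can be sketched in a few lines. This is illustrative only: the class names (Shape, LineSegment) and the exact shape of the data are assumptions, not the final p5.js implementation.

```javascript
// Hypothetical sketch of the extend-or-append logic from Question 1.
// Names (Shape, LineSegment, addToShape) are illustrative, not final.

class Shape {
  constructor() {
    this.primitives = [];
  }
  lastPrimitive() {
    return this.primitives[this.primitives.length - 1];
  }
}

class LineSegment {
  constructor(vertex) {
    this.vertices = [vertex]; // each vertex is an array of numbers
  }
  addToShape(shape) {
    const last = shape.lastPrimitive();
    if (last instanceof LineSegment) {
      // Same type: extend the existing primitive's data.
      last.vertices.push(...this.vertices);
    } else {
      // Different type (or empty shape): append a new primitive.
      shape.primitives.push(this);
    }
  }
}

// Usage: three vertex() calls collapse into one LineSegment primitive.
const shape = new Shape();
new LineSegment([10, 20]).addToShape(shape);
new LineSegment([30, 40]).addToShape(shape);
new LineSegment([50, 60]).addToShape(shape);
console.log(shape.primitives.length);             // 1
console.log(shape.primitives[0].vertices.length); // 3
```

A bezierVertex() call would construct a different primitive type, so its addToShape() would append rather than extend, which is what makes mixed-primitive shapes possible.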

Summary of decisions

It looks like we're going to go ahead with this interface with a few adjustments.

Adjustment 1: Based on the discussion, it was determined that the tessellation primitives can implement the PrimitiveShape interface directly, without the need for an extendByVertex() method. The CurveSegment primitive will not need this type of method either. The idea is that we'll allow primitives of the same type to have write access to each other's data. I went ahead and edited the proposal above to include these changes.

Adjustment 2: In the 2D case, we may want to output commands to the Path2D interface rather than the rendering context. This may allow us to issue the basic rendering commands and any clipping commands using the same kinds of paths, since the native canvas API's clip() method accepts a Path2D object.
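One way to picture Adjustment 2 is a converter that emits Path2D-style commands as plain data, which could then be replayed onto a Path2D for filling, stroking, or clipping. The function name and primitive format below are assumptions for illustration, not the planned PrimitiveToContext2DConverter API.

```javascript
// Illustrative only: convert primitives into Path2D-style commands,
// so the same path data can feed fill/stroke and clip(). The
// primitive format and function name here are hypothetical.

function primitivesToPathCommands(primitives) {
  const commands = [];
  for (const prim of primitives) {
    if (prim.type === 'line') {
      for (const [x, y] of prim.vertices) {
        // Start the path with moveTo, then connect with lineTo.
        commands.push(commands.length === 0
          ? ['moveTo', x, y]
          : ['lineTo', x, y]);
      }
    } else if (prim.type === 'bezier') {
      const [x1, y1, x2, y2, x3, y3] = prim.controlPoints;
      commands.push(['bezierCurveTo', x1, y1, x2, y2, x3, y3]);
    }
  }
  return commands;
}

// In a browser, the commands could be replayed onto a Path2D:
//   const path = new Path2D();
//   for (const [name, ...args] of commands) path[name](...args);
//   ctx.fill(path); ctx.clip(path);

const commands = primitivesToPathCommands([
  { type: 'line', vertices: [[0, 0], [10, 0]] },
  { type: 'bezier', controlPoints: [15, 0, 20, 5, 20, 10] }
]);
console.log(commands);
// [['moveTo', 0, 0], ['lineTo', 10, 0],
//  ['bezierCurveTo', 15, 0, 20, 5, 20, 10]]
```

Because clip() accepts a Path2D, this keeps the basic rendering commands and any clipping commands on the same kind of path object, as described above.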

Next steps

Will it be helpful to break up the refactoring into smaller tasks, and then possibly assign those to individual contributors? Or would it make sense for someone to take on the whole thing? Other than this project-management aspect, it seems that we're ready to start on the implementation. After that, since this will be a big change to the codebase, we'd like to use the new visual testing system (once it's ready) to prevent regressions.

@capGoblin
Copy link
Contributor

Hi everyone,

Apologies for the delay in updates. Here's the progress so far:

I've implemented the vertex functions for both 2D and 3D, along with visitors and contours, and I've started work on shape kinds.

If possible, I'll aim to complete the shape kinds and then open a PR for comprehensive review; otherwise, I'll soon open one with whatever is done so far.

@GregStanton
Copy link
Collaborator Author

Hi @capGoblin! No problem at all, and thanks for all your work! Did you see the update in the body of this issue about the new approach, the task list, and the new branch?

@GregStanton
Copy link
Collaborator Author

@GregStanton I've given you triage access to this repo.

Thanks @limzykenneth!

@limzykenneth
Copy link
Member

@GregStanton Not sure what point you are at with this feature but I'm thinking of moving this to be a proposal in the 2.0 RFC to give it a bit more flexibility in terms of not having to consider breaking changes as much. Let me know if that sounds good to you, you can summarize what you have here as the proposal and no need to write something from scratch.

@GregStanton
Copy link
Collaborator Author

Thanks @limzykenneth! There's been some good progress so far. I've sorted out a number of the smaller tasks in the task list, and we've worked out the overall implementation. However, there are some major updates that I'd like to make, in light of the move to p5.js 2.0.

  1. I proposed a significant API change in a related issue, based on an idea from @davepagurek.
  2. I worked out initial ideas for a minimal p5.Shape class that would expose some of the essential functionality we've planned. It would also make the design more extensible for add-on libraries. Dealing with this now would be good because there may be some consistency issues with p5.Geometry to deal with.

In short, I think this issue definitely satisfies our criteria for the 2.0 proposals, but I need to figure out how to incorporate the two updates above. Also, the comments on the current issue contain a long brainstorming discussion that led to the implementation design, but newcomers shouldn't need to read through all that.

So, it might make sense for me to submit a clean, updated issue using the new 2.0 template. (I could tag everyone who was originally interested in this issue and link back to it for context.) Do you mind if I spend some time this week to figure out a good approach?

@GregStanton
Copy link
Collaborator Author

Okay, I'm ready for this issue to be listed as an RFC proposal, @limzykenneth. I think this should be separate from the proposal for the new vertex API, since the refactoring could be done for either API.

@GregStanton
Copy link
Collaborator Author

Some thoughts about an explicit name for the default shape kind in 2D and 3D are below. Any feedback would be great!

Current situation:
It looks like TESS is the explicit name for the default shape kind in 3D mode, but there is no explicit name for the default shape kind in 2D mode (it's set to null). The reference doesn't specify the default in either case, and the explanation for TESS seems unclear.

Solution?
We'll want to address this as part of the refactoring, in accordance with proposal #6679. That proposal mentions SEGMENTS as a possible name for the default shape kind in 2D and 3D (we would eliminate TESS). However, I'm now thinking this name doesn't quite work. How about SEGMENT_CHAIN? Although CHAIN would be more concise, I'd probably go with the more precise term, especially since it fits well with existing names like TRIANGLE_FAN, etc.

Here's why I'm proposing SEGMENT_CHAIN in place of SEGMENTS:

  • In the refactoring plan, we're calling the default shape classes "segments." However, SEGMENTS is problematic: the existing plural shape kinds (POINTS, LINES, TRIANGLES, QUADS) are all disconnected. So, SEGMENTS suggests a set of disconnected segments.
  • SEGMENT would likely cause confusion since shapes of this kind won’t usually consist of a single segment.
  • SPLINE would make sense. This term usually refers to polynomials that are pieced together. Arcs aren’t polynomials (eventually we'll include arcVertex()), but there is such a thing as non-polynomial splines. The problem is that we’re already planning to use this term for splineVertex() (see #6766).

@davepagurek
Copy link
Contributor

@jules-kris, you're looking into explicit names for default values right now, let me know if you have any feedback on the name for this one!

I think SEGMENT_CHAIN is a good working title while we think, and we can switch the name if we come up with something else. I guess in the default mode, you're defining a shape by creating the contours or tracing the outlines of shapes? Both "contour" and "outline" are kind of overloaded terms, though. "Silhouette" might be a less overloaded kind-of-synonym? And some of the other modes also do that, e.g. TRIANGLES or QUADS -- maybe that's okay, though, and those can be thought of as special cases for when you want to draw the silhouettes of a bunch of the same kind of shape in a row?

@GregStanton
Copy link
Collaborator Author

@davepagurek: Thanks Dave! I appreciate the brainstorm!

My initial feeling about "outline" and "silhouette" is that these both seem to refer to marking the boundary of a 2D shape, which is typically a closed curve. To be fair, beginShape() does use "shape" to include non-closed curves... But I'm not sure, especially if we want the same default for both 2D and 3D. For example, we might draw a helix in 3D mode; it's hard for me to see that as an outline or a silhouette. Also, silhouette is hard to spell.

But it's great to come up with lots of ideas. The more the better. I'll let your ideas percolate in my brain and see if they inspire something new!

@GregStanton
Copy link
Collaborator Author

GregStanton commented Oct 29, 2024

Oh wait. How about PATH? Our usage would be similar to usage in related contexts. This term generally refers to any curve (open or closed), potentially made of multiple pieces (linear or nonlinear). It also doesn't seem to conflict with anything else in p5.

Paths in related contexts:

  • Quartz2D: A path defines one or more shapes, or subpaths. A subpath can consist of straight lines, curves, or both. It can be open or closed.
  • SVG paths: A path represents the outline of a shape which can be filled or stroked... A path is described using the concept of a current point. In an analogy with drawing on paper, the current point can be thought of as the location of the pen. The position of the pen can be changed, and the outline of a shape (open or closed) can be traced by dragging the pen in either straight lines or curves.
  • HTML Canvas: A path is a list of points, connected by segments of lines that can be of different shapes, curved or not, of different width and of different color. A path, or even a subpath, can be closed.
  • Path (topology): "Path" also has a similar meaning in mathematics. In particular, paths can be pieced together using path composition.

Does anyone see any downsides to this? Perhaps we'd want to use "path" to mean something else in the future? I doubt that's a concern, though. The default shape kind is the closest thing in p5 to the concept of a path. In fact, I just noticed that I referred to these default shapes as "composite paths" in the title of this issue haha. I also previously described our concept of segments as "equivalent to subpaths in the native canvas API and SVG." Instead of classes for LineSegment, etc. we could have LinePath etc. (This is distinct from the Line class.)

@davepagurek
Copy link
Contributor

Path sounds pretty good to me, and it's definitely easier to spell! I suppose it's a little overloaded in the graphics space because, e.g., SVG and the native canvas context have objects called paths rather than modes, but p5 doesn't have any objects or other functions using that name, and this shape mode works pretty similarly to the SVG/native-canvas path-drawing APIs.

@GregStanton
Copy link
Collaborator Author

Thanks @davepagurek! Good point about objects vs. modes. Hopefully that doesn't cause too much confusion. For what it's worth, we will have path objects under the hood (LinePath, BezierPath, SplinePath), although I don't expect we'll expose those to the user, at least not for now.

@GregStanton
Copy link
Collaborator Author

One more idea for the name of the default shape kind:

If a future version of p5 were to allow all shape kinds to work with all vertices, then we'd be forced to interpret LINES more generously. In some contexts, "line" is indeed used to mean "curve" (we have more precise terms "rectilinear" and "curvilinear" for this reason). Then LINE might work even better than PATH as the name of the default shape kind. I thought I should add this observation here, for the record, but I'm not sure yet which name is better.

@GregStanton
Copy link
Collaborator Author

GregStanton commented Nov 1, 2024

Update: After a quick real-time discussion with @davepagurek , it looks like PATH and LINE are our top candidates for the name of the default shape kind. We put a little thought into this, since users may sometimes need to explicitly name the default shape kind, due to the new way contours will work.

@davepagurek: After writing down our observations (see below), I'm now leaning toward PATH rather than LINE. Do you have a preference?

@jules-kris: If you have any feedback, that would be great! If you want to skip the foregoing discussion and just focus on this comment, it should summarize the important things.

General criteria:
It'd be nice to fulfill as many of the criteria below as we can (I also adapted these into a general list of criteria for #6679).

  1. Descriptiveness
    • Implies straight, curved, or composite
    • Implies open or closed
    • Implies planar (fits in a 2D sketch) or non-planar (requires a 3D sketch)
  2. Consistency
    • Is free of conflicts with names of other shape kinds
    • Is free of conflicts with p5 API in general
    • Is free of conflicts with other APIs (mainly the Canvas and SVG APIs)
    • Is free of conflicts in related contexts (art, math, ...)
  3. Simplicity
    • Familiar to beginners
    • Easy to spell
    • Concise
  4. Durability (in the future, its scores on the other criteria will not be reduced)

Assessment of PATH:

  • Descriptiveness: Excellent
  • Consistency: Good
  • Simplicity: Excellent
  • Durability: Good

Notes:

  1. The consistency would be excellent, except that Canvas and SVG paths are user-facing objects, rather than modes. For now at least, p5's path objects will just be internal. (Specifically, a path would be implemented as a contour object with the PATH kind.)
  2. The durability would be excellent, except that the name LINE would be more consistent with LINES in the future, if p5 were to allow all shape kinds to work with all vertex types.

Assessment of LINE:

  • Descriptiveness: Average
  • Consistency: Average
  • Simplicity: Excellent
  • Durability: Excellent

Notes:

  1. Descriptiveness isn't "Excellent" since "line" is often used to specifically mean a straight line, even though "line" is used to refer to straight or curved lines in some contexts, including in math and in art. As @davepagurek noted in conversation, it also may imply that it makes only stroked shapes, as in line art.
  2. Consistency is "Average" since LINE conflicts with the meaning of line(), and since it conflicts with internal class names (at least currently, and resolving these conflicts might require longer class names).
  3. Durability is "Excellent" rather than "Good" because it would fit well with the existing LINES name if all shape kinds eventually work with all vertex types.

Plan for now:
I'll use PATH for now. We can potentially change this, up until the 2.0 release.

@golanlevin
Copy link

I was about to file a new issue, then found this one.
As you are aware, mixed vertex types are not playing well together. I'm currently working on an SVG-export library specifically for pen-plotting, to which this issue is directly relevant. It seems like y'all are far ahead on thinking about this, so I'm just parking this here as an illustration of the current state of things (v.1.11.0) for mixed vertex types. Thanks for all of your effort on this.

(Image: problems-with-mixed-vertices)

A live version of this demo is here:
https://editor.p5js.org/golan/sketches/gGBFMTvaI

let correctImage;

function setup() {
  createCanvas(800, 500);
  correctImage = loadImage("correct.png"); 
}

function draw() {
  background(220);
  
  text("Mixed vertex types don't play well together:", 75, 70); 
  
  push();
  translate(0,0); 
  text("Bezier only", 75, 90); 
  beginShape();
  vertex( 75, 100); 
  vertex( 80, 100); 
  bezierVertex(100,100, 110,130, 125,130);
  bezierVertex(140,130, 150,100, 170,100);
  vertex(175, 100);
  vertex(175, 125);
  vertex(200, 150);
  vertex(175, 175);
  vertex(175, 200);
  vertex(150, 205);
  // quadraticVertex(125,140, 100,195); // commented out
  vertex(100, 195);
  vertex( 75, 200);
  vertex( 75, 175); 
  vertex( 75, 125); 
  endShape(CLOSE); 
  pop(); 
  
  push();
  translate(150,0); 
  text("Quadratic only", 75, 90); 
  beginShape();
  vertex( 75, 100); 
  vertex( 80, 100); 
  //bezierVertex(100,100, 110,130, 125,130); // commented out
  //bezierVertex(140,130, 150,100, 170,100); // commented out
  vertex(175, 100);
  vertex(175, 125);
  vertex(200, 150);
  vertex(175, 175);
  vertex(175, 200);
  vertex(150, 205);
  quadraticVertex(125,140, 100,195); 
  vertex(100, 195);
  vertex( 75, 200);
  vertex( 75, 175); 
  vertex( 75, 125); 
  endShape(CLOSE); 
  pop(); 

  push();
  translate(300,0); 
  text("Bezier + Quadratic (Wrong)", 75, 90); 
  beginShape();
  vertex( 75, 100); 
  vertex( 80, 100); 
  bezierVertex(100,100, 110,130, 125,130);
  bezierVertex(140,130, 150,100, 170,100);
  vertex(175, 100);
  vertex(175, 125);
  vertex(200, 150);
  vertex(175, 175);
  vertex(175, 200);
  vertex(150, 205);
  quadraticVertex(125,140, 100,195); 
  vertex(100, 195);
  vertex( 75, 200);
  vertex( 75, 175); 
  vertex( 75, 125); 
  endShape(CLOSE); 
  pop(); 
  
  push();
  translate(500,0); 
  text("Bezier + Quadratic (Correct)", 75, 90);
  image(correctImage, 75,100); 
  pop(); 
  
  
  
  text("Adding a curveVertex alters the entire shape, but shouldn't:", 75, 270); 
  
  push();
  translate(0,200); 
  text("2 vertexes on left", 75, 90); 
  beginShape();
  vertex( 75, 100); 
  vertex( 80, 100); 
  vertex(175, 100);
  vertex(175, 125);
  vertex(200, 150);
  vertex(175, 175);
  vertex(175, 200);
  vertex(150, 205);
  vertex(100, 195);
  vertex( 75, 200);
  vertex( 75, 175); 
  vertex( 60, 160); // regular vertex added
  vertex( 90, 140); // regular vertex added
  vertex( 75, 125); 
  endShape(CLOSE); 
  pop(); 
  
  push();
  translate(150,200); 
  text("2 curveVertexes on left instead (Wrong)", 75, 90); 
  beginShape();
  vertex( 75, 100); 
  vertex( 80, 100); 
  vertex(175, 100);
  vertex(175, 125);
  vertex(200, 150);
  vertex(175, 175);
  vertex(175, 200);
  vertex(150, 205);
  vertex(100, 195);
  vertex( 75, 200);
  vertex( 75, 175); 
  curveVertex( 60, 160); // curveVertex added instead
  curveVertex( 90, 140); // curveVertex added instead
  vertex( 75, 125); 
  endShape(CLOSE); 
  pop(); 
}

@GregStanton
Copy link
Collaborator Author

Thanks for sharing @golanlevin! It's great to hear that this project may help with the library you're working on! Would you be interested in doing some user testing with your library once everything is ready? If so, you may want to check out the new vertex-function API for p5.js 2.0, since it looks like we're going to be switching to that.

@golanlevin
Copy link

@GregStanton Thanks for pointing me to the new vertex API for p5 v.2!
The SVG-export library I'm making is specifically intended for p5.js, so this is directly relevant to me. I appreciate that there will be some breaking changes, and I'll check the roadmap closely!

@GregStanton
Copy link
Collaborator Author

@golanlevin Sure thing! I'm glad that was helpful :)

Projects
Status: Implementation