
Rhai playground (using WebAssembly) #169

Closed
2 of 11 tasks
alvinhochun opened this issue Jun 21, 2020 · 75 comments
@alvinhochun
Contributor

alvinhochun commented Jun 21, 2020

The idea is to make a playground for Rhai scripts that runs in a web browser which can showcase the features of the Rhai scripting language. It might also be possible for others to repurpose it as a Rhai script editor.

I've started an attempt on https://github.com/alvinhochun/rhai-playground, but I am not making any promises.
The master branch gets automatically built and deployed to: https://alvinhochun.github.io/rhai-playground-unstable/
I might irregularly upload specific builds to: https://alvinhochun.github.io/rhai-demo/

Wish list (not a roadmap or to-do list):

  • Ability to run Rhai scripts
  • Proper syntax handling for Rhai scripts
  • Real-time AST compilation with error reporting (already works though pretty limited)
  • Integration with the book (embedding is now possible)
  • REPL
  • Autocomplete list for built-in packages
  • Provide more fancy IDE-like features
  • Engine settings
  • Custom modules
  • Heuristically guess variable types?
  • Debugging
@schungx
Collaborator

schungx commented Jun 21, 2020

Added an entry in #100

@schungx
Collaborator

schungx commented Jun 21, 2020

Real-time AST compilation with error reporting

This should be quite possible because the parser is fast enough. If you throttle the parsing to run during 500ms typing pauses or so it'll probably work fine...
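The throttling idea could be sketched like this; `ReparseThrottle` and the exact 500ms figure are purely illustrative, not anything from the Rhai codebase:

```rust
use std::time::{Duration, Instant};

/// Hypothetical helper: only re-parse once the user has paused typing.
struct ReparseThrottle {
    last_edit: Instant,
    delay: Duration,
}

impl ReparseThrottle {
    fn new(delay: Duration) -> Self {
        Self { last_edit: Instant::now(), delay }
    }

    /// Call on every keystroke to push the re-parse back.
    fn note_edit(&mut self, at: Instant) {
        self.last_edit = at;
    }

    /// Call from a timer tick; true once the pause is long enough.
    fn should_reparse(&self, now: Instant) -> bool {
        now.duration_since(self.last_edit) >= self.delay
    }
}
```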

However, the parser currently bugs out at the first error, which may not be perfect. To have a good experience, we really need reasonable error recovery, as per #119

Autocomplete list for built-in packages

This may not be easy as Rhai is dynamic, so there is no type information. It is difficult to know how to filter the list.

@alvinhochun
Contributor Author

Autocomplete list for built-in packages

This may not be easy as Rhai is dynamic, so there is no type information. It is difficult to know how to filter the list.

It will just be unfiltered (i.e. only filtered based on whatever the user already typed in) to start with. If it ever gets advanced enough to be able to heuristically guess the data types (which seems very unlikely) then perhaps it can be developed further. Another idea could be type annotation comments but I don't aim to discuss this...
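A first cut of that prefix-only filtering might look like the following sketch (the `complete` helper and the name list are made up for illustration):

```rust
/// Hypothetical first cut at completion: with no type information
/// available, just filter the known built-in names by whatever the
/// user has already typed.
fn complete(builtins: &[&str], typed: &str) -> Vec<String> {
    builtins
        .iter()
        .filter(|name| name.starts_with(typed))
        .map(|name| name.to_string())
        .collect()
}
```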


I realized that it might be easier to code the playground if it can have direct access to the AST content, but it looks like a lot of it is currently inaccessible from the outside. I might also end up wanting to use some other innards of Rhai to implement other functions. That means I might have to maintain a fork for playground-specific use, and the playground will stay as a separate repository. Do you have a better idea?

@schungx
Collaborator

schungx commented Jun 23, 2020

I can open up the AST, including the Expr, Stmt and Token types. I never realized users would want to know the inside details, so I kept them private in order not to break user code when I change the implementation.

Or do you think we should hide the pub behind a feature gate?

@schungx
Collaborator

schungx commented Jun 23, 2020

@alvinhochun if you could pull from my fork instead: https://github.com/schungx/rhai

The latest version has a new feature internals that exposes the internal data structures of the AST.

@alvinhochun
Contributor Author

alvinhochun commented Jun 23, 2020 via email

@schungx
Collaborator

schungx commented Jun 23, 2020

You mean a rhai_core and a rhai that only re-exports the common API?

@alvinhochun
Contributor Author

How about splitting the parser and AST stuff to a rhai_ast crate?

@schungx
Collaborator

schungx commented Jun 23, 2020

How about splitting the parser and AST stuff to a rhai_ast crate?

Hhhmmm... that probably should work, but I'd hesitate to split a simple project like Rhai into two even simpler crates. Unless there is an overwhelming reason...

@alvinhochun
Contributor Author

alvinhochun commented Jun 23, 2020

Maybe it'll just be simpler for me to maintain a fork for the playground.

@schungx
Collaborator

schungx commented Jun 23, 2020

Maybe it'll just be simpler for me to maintain a fork for the playground.

You don't have to. Just turn on features = [ "internals" ] and you basically get rhai_ast there.

I fully intend to merge this feature into master a bit later.

@alvinhochun
Contributor Author

I experimented with reusing the existing Rhai tokenizer code for syntax highlighting; it turns out it takes quite a few modifications.

This is the modified code that "works" (if you diff it against the original code snippets you might be able to tell how it was changed):
https://github.com/alvinhochun/rhai-playground/blob/184d88e6fb86e18fc525cd24233b77d1898bfa6c/src/cm_rhai_mode/token.rs

I also uploaded a build with this new syntax highlighting. (Compare with the previous Rust highlighting)

The main difference is that CodeMirror (the editor I'm using) only gives the tokenizer one line at a time. It also caches the tokenizer state per line so that it can restart tokenization from any line. This means I had to change how block comments are handled. (I am also surprised to see that Rhai doesn't support multi-line strings...)

What do you think about refactoring the tokenizer in Rhai to allow the code to be reused? I'm thinking of splitting the "streaming" part of TokenIterator into a separate trait so I can make an adapter for the CodeMirror stream, and also somehow make it handle per-line tokenization. (Though I am also wondering if I can use the actual AST for syntax highlighting.)
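A minimal sketch of the proposed split, with hypothetical `CharStream`/`StrStream` names (Rhai's actual trait, added later in this thread, is called `InputStream`): the tokenizer pulls characters through a trait rather than a concrete Peekable<Chars>, so a CodeMirror line stream could implement the same interface.

```rust
/// Hypothetical abstraction of the tokenizer's input.
trait CharStream {
    fn peek(&mut self) -> Option<char>;
    fn next(&mut self) -> Option<char>;
}

/// Adapter over a plain string, standing in for the current
/// Peekable<Chars>-based input.
struct StrStream<'a> {
    iter: std::iter::Peekable<std::str::Chars<'a>>,
}

impl<'a> StrStream<'a> {
    fn new(s: &'a str) -> Self {
        Self { iter: s.chars().peekable() }
    }
}

impl<'a> CharStream for StrStream<'a> {
    fn peek(&mut self) -> Option<char> {
        self.iter.peek().copied()
    }
    fn next(&mut self) -> Option<char> {
        self.iter.next()
    }
}
```

A CodeMirror adapter would implement the same trait over the editor's per-line stream object instead of a Rust string.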

@schungx
Collaborator

schungx commented Jun 26, 2020

Let me diff it and have a look. Ideally, we'd like one code base that can serve multiple uses. The tokenizer is stable enough (i.e. it doesn't change much) that we can experiment.

I'm not familiar with CodeMirror myself... can you list out a few aspects of tokens.rs that need changing in order to cater to your uses?

Off hand I can see the need to abstract out the Peekable<Chars> stream so it can be used with other input streams that yield char...

@schungx
Collaborator

schungx commented Jun 26, 2020

This means I had to change how block comments are handled. (I am also surprised to see that Rhai doesn't support multi-line strings...)

Yes, it wasn't hard to do, but it burdens the scripting language with another obscure syntax. There hasn't been any call for it yet...

So basically we need a new state, returned together with the token, indicating whether parsing stopped in the middle of a multi-line comment or in normal text. I see you already have such an enum...

And your idea of splitting off the parse state from the parser should work well. I'll start looking into the refactoring and give you a trial version in a bit.

@alvinhochun
Contributor Author

alvinhochun commented Jun 26, 2020

can you list out a few aspects of tokens.rs that need changing in order to cater to your uses?

  1. Each stream has only one line, therefore end of stream == end of line and the trailing '\n' is not included.
  2. When a new line starts, it passes in the stream of the new line, and the state object at the end of the previous line.
  3. The tokenizer should not need to check for EOF (end of line is all it should care about).
  4. The stream tracks column position internally.
  5. Blank lines are not tokenized. (The tokenizer can optionally be informed of blank lines and mutate the state, but I don't think we need this.)
  6. Line/block comments also need to be tokenized, instead of being skipped.
  7. For syntax highlighting, I don't need to get the actual value of the literals so I bypassed some of those (can it be made optional with some trait magic?)
  8. I might also want to change the handling of string literals a little bit. Currently it is not possible to highlight escape sequences in another style. Also, if you try the current version and put in some invalid escape sequences the highlighting will sort of break apart, because the tokenizer stops as soon as the invalid escape sequence is hit.

Here is the API of the CodeMirror stream if you want to see it (and here is the binding in Rust).

@schungx
Collaborator

schungx commented Jun 26, 2020

  • Each stream has only one line, therefore end of stream == end of line and the trailing '\n' is not included.

It doesn't really matter for the parser. At the end of the stream, the tokenizer will start outputting EOF indefinitely. If the stream is only one single line, it doesn't hurt the tokenizer a single bit. The line number will always be 1.

  • When a new line starts, it passes in the stream of the new line, and the state object at the end of the previous line.

Understood. We need some way to keep state so the tokenizer knows it is starting inside a multi-line comment. All other tokens fit on one single line with no exceptions... maybe we'll also handle the case of multi-line strings with the same mechanism.

  • The tokenizer should not need to check for EOF (end of line is all it should care).

Yes. EOF is the tokenizer's way of saying "no more data". I can return None instead, but Some(EOF) is easier for me to use in the parser. I can toggle a feature for you to switch it to return None.

  • The stream tracks column position internally.

Fine.

  • Blank lines are not tokenized. (The tokenizer can optionally be informed of blank lines and mutate the state, but I don't think we need this.)

Whitespaces are skipped during tokenizing anyway. But we need to keep it for multi-line comments and strings (in the future).

  • Line/block comments also need to be tokenized, instead of being skipped.

So I have a Token::Comment which I'd simply skip in the parser, or hide it behind a feature gate.

  • For syntax highlighting, I don't need to get the actual value of the literals so I bypassed some of those (can it be made optional with some trait magic?)

Right now, the tokenizer only tracks the starting position of the token. Do you need its length or the ending position of the token as well?

Why not keep the literals? I don't think they hurt...

  • I might also want to change the handling of string literals a little bit. Currently it is not possible to highlight escape sequences in another style. Also, if you try the current version and put in some invalid escape sequences the highlighting will sort of break apart, because the tokenizer stops as soon as the invalid escape sequence is hit.

For string/character literals, maybe I also include a mapping table of byte-range -> character position?

@alvinhochun
Contributor Author

alvinhochun commented Jun 26, 2020

Yes. EOF is the tokenizer's way of saying "no more data". I can return None instead, but Some(EOF) is easier for me to use in the parser. I can toggle a feature for you to switch it to return None.

This is not needed; in fact, CodeMirror will not call the tokenizer with a stream at its ending position. I expect to never get an EOF.

Right now, the tokenizer only tracks the starting position of the token. Do you need its length or the ending position of the token as well?

Sorry, I did not explain this clearly. The CodeMirror tokenize process works like this:

  • CodeMirror calls the tokenizer with a state object (which my code provided) and the stream of the line.
  • The tokenizer consumes one token from the stream (which could be one or more characters), mutates the state if needed and returns the type of the token. (This is what my code does.)
  • CodeMirror sees how many characters were consumed, then marks the line:column range as the returned token type for highlighting.
  • CodeMirror calls the tokenizer again, with the stream position now at the next unconsumed character (then repeats the last two steps), until the whole line is consumed.
  • The above steps are repeated with the following line, until all lines are consumed.

This is what I meant by "stream tracks column position internally". The position information is external to the tokenizer so it doesn't need to do any tracking.
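The driver loop described above could be mocked roughly like this (all names are hypothetical; a toy word/whitespace tokenizer stands in for the real one, and offsets assume ASCII text for simplicity):

```rust
/// Hypothetical mock of CodeMirror's per-line driver: it owns the
/// position, repeatedly hands the tokenizer the rest of the line plus a
/// state object, and styles each span by how much was consumed.
fn drive_line(
    line: &str,
    state: &mut u32, // e.g. block-comment nesting carried between lines
    tokenize: fn(&str, &mut u32) -> (usize, &'static str),
) -> Vec<(usize, usize, &'static str)> {
    let mut spans = Vec::new();
    let mut pos = 0;
    while pos < line.len() {
        let (consumed, style) = tokenize(&line[pos..], state);
        spans.push((pos, pos + consumed, style));
        pos += consumed;
    }
    spans
}

/// Toy tokenizer: runs of alphanumerics are "ident", everything else "ws".
/// (ASCII-only; real code would track byte offsets via char_indices.)
fn toy_tokenize(rest: &str, _state: &mut u32) -> (usize, &'static str) {
    let is_word = rest.chars().next().map_or(false, |c| c.is_alphanumeric());
    let n = rest
        .chars()
        .take_while(|c| c.is_alphanumeric() == is_word)
        .count();
    (n.max(1), if is_word { "ident" } else { "ws" })
}
```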

Why not keep the literals. I don't think they hurt...

Extracting the literals is a little bit of extra work, but I guess it's fine.

For string/character literals, maybe I also include a mapping table of byte-range -> character position?

This won't really work with CodeMirror's tokenization process. Perhaps I can try with an example of what it would need:

Initial:
"Hello\nworld"
^--- stream position
state: { in_str_literal: false }

call #1:
"Hello\nworld"
      ^--- stream position (after)
state:    { in_str_literal: true }
consumed: "Hello
token:    string literal

call #2:
"Hello\nworld"
        ^--- stream position (after)
state:    { in_str_literal: true }
consumed: \n
token:    escape sequence

call #3:
"Hello\nworld"
              ^--- stream position (after)
state:    { in_str_literal: false }
consumed: world"
token:    string literal

It's just something nice to have, but if you think it is too complicated to be added to the built-in tokenizer you can leave it out and I'll see if it can be tacked on.

@schungx
Collaborator

schungx commented Jun 26, 2020

This is not needed; in fact, CodeMirror will not call the tokenizer with a stream at its ending position. I expect to never get an EOF.

Yes it will, if there is only whitespace till the end. The tokenizer will not find anything and will return EOF to say that it didn't find any token.

@schungx
Collaborator

schungx commented Jun 26, 2020

You can take a look at this branch: https://github.com/schungx/rhai/tree/tokenizer

The get_next_token function should be what you need. Just ignore the Position returned if you're tracking position yourself.

You need to implement the InputStream trait.

States are kept in the type TokenizeState.

Multi-level nested comments are supported, automatically handled at the beginning of the next line - in fact, the TokenizeState stores the current nesting level and get_next_token will scan till this level drops to zero before resuming normal tokenization.
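The resume-at-line-start behaviour could be approximated by something like this simplified scan (not the actual `get_next_token` code; it assumes the carried-over nesting level is at least 1 on entry):

```rust
/// Hypothetical simplification: given the comment-nesting level carried
/// over from the previous line, scan the new line until the level drops
/// to zero, returning the byte offset where normal tokenization resumes
/// (or None if the comment spans this whole line).
fn skip_nested_comment(line: &str, level: &mut u32) -> Option<usize> {
    let bytes = line.as_bytes();
    let mut i = 0;
    while i < bytes.len() {
        if bytes[i..].starts_with(b"/*") {
            *level += 1; // comment nests one level deeper
            i += 2;
        } else if bytes[i..].starts_with(b"*/") {
            *level -= 1; // close one level
            i += 2;
            if *level == 0 {
                return Some(i); // back in normal code
            }
        } else {
            i += 1;
        }
    }
    None // still inside the comment at end of line
}
```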

@alvinhochun
Contributor Author

This is not needed; in fact, CodeMirror will not call the tokenizer with a stream at its ending position. I expect to never get an EOF.

Yes it will, if there is only whitespace till the end. The tokenizer will not find anything and will return EOF to say that it didn't find any token.

You are right. I guess I didn't notice it because I didn't actually try making it a hard error.

You can take a look at this branch: https://github.com/schungx/rhai/tree/tokenizer

The get_next_token function should be what you need. Just ignore the Position returned if you're tracking position yourself.

You need to implement the InputStream trait.

States are kept in the type TokenizeState.

Multi-level nested comments are supported, automatically handled at the beginning of the next line - in fact, the TokenizeState stores the current nesting level and get_next_token will scan till this level drops to zero before resuming normal tokenization.

Thanks for the refactor, it is almost what I needed but there are some issues:

  • I can't set TokenizeState::include_comments as it's private and there isn't a constructor.
  • TokenizeState::can_be_unary isn't updated at the end of get_next_token. Can you wrap it in a wrapper function that does it?
  • It seems that the initial value of TokenizeState::can_be_unary needs to be true, but #[derive(Default)] would set it to false.

@alvinhochun
Contributor Author

On an unrelated note, I would like to be able to list and inspect the script-defined functions inside the AST.

@schungx
Collaborator

schungx commented Jun 26, 2020

OK. Done.

can_be_unary is now non_unary which defaults to false. :-)

@alvinhochun
Contributor Author

OK. Done.

can_be_unary is now non_unary which defaults to false. :-)

Looks like you forgot to mark the new get_next_token as pub.

@schungx
Collaborator

schungx commented Jun 26, 2020

OK fixed!

Module::iter_script_fn is added.

@schungx
Collaborator

schungx commented Jun 26, 2020

* Provide more fancy IDE-like features

For IDE features, I think it might be easier just to write a standard Language Server Protocol plugin for the Rhai syntax, so that it can be used with VS Code, Eclipse, etc.

I remember reading the TextMate grammar and it is extremely complicated... I wonder if there is something that can generate at least a skeleton based on some C-like language...

@alvinhochun
Contributor Author

alvinhochun commented Jun 26, 2020

I don't really intend to implement a full IDE on the playground, that'd be crazy (it is a "wish list" for a reason). I don't have experience with writing LSP servers and I'm not too interested for now.

As for the playground, the current build seems functional enough. What other things would you want for a first release? I would want some example scripts to be included (selectable from the interface) and some kind of integration with the book. The styling and script exec output could use some improvements too, but I don't really have any idea what to change.

@alvinhochun
Contributor Author

Will you move the Playground to a permanent URL? Or keep the current rhai-demo?

I'll write up a chapter on it in "Getting Started".

I think I want to keep it for just test builds going forward. When I can finalize an initial release build I'd probably put it under /rhai-playground, unless you want to host it next to the book, in which case I'll let you handle it.

Suggestion: when the script comes back with an error (either syntax error or runtime error), use the line/position to display the source line in the result pane

Error reporting can use a lot of improvements. CodeMirror comes with an interesting Linter addon that I would want to try.

I've noticed however that the error positions are a bit off. For example, if you make an invalid escape sequence in a print call it reports an error at the start of the call instead of the string literal. But it could be just my code looking at the wrong place (need to check when I get back to it).

@schungx
Collaborator

schungx commented Jun 29, 2020

if you want to host it next to the book I'll let you handle it.

That could also work. I can just put it inside the book. But you'll have to build it though...

Or, I think it is best to keep it under your account at /rhai-playground until such time as we move to an organization.

For example, if you make an invalid escape sequence in a print call it reports an error at the start of the call instead of the string literal.

You caught a bug here. The error position is usually quite accurate... this is the first time in a long while that I found one off...

(EDIT) It is fixed.

Error reporting can use a lot of improvements.

Yes, right now it doesn't attempt to recover from errors. It just bugs out of there. Technically speaking, we should try to recover so more errors can be listed, but that would complicate the parser quite a bit as I can't simply ? my way out of all errors...

@alvinhochun
Contributor Author

In the past day, I converted the playground to use Vue.js with a few minor changes and added the ability to stop an Async run. My plan following this is to try bootstrap-vue (I had a little bit of experience with it that I've mostly forgotten) so I can start improving the interface.

I've also set up a github action to automatically deploy to https://alvinhochun.github.io/rhai-playground-unstable/. Also because of this, the built files are available for download as artifacts (latest build at time of writing).

I think I'll keep https://alvinhochun.github.io/rhai-demo/ as a semi-stable version for now. I will redirect it to the new location in the future.

@alvinhochun
Contributor Author

I've added the ability for the playground to be embedded on another page (see example). Though I haven't yet looked at how it can be included from mdBook. (Perhaps best to open a separate issue for this?) Note: You probably don't want the playground to be loaded immediately on page load, because the resources are a bit heavy compared to the rest of the book.

It is limited in that it can only run plain scripts, without extra modules and without customizations in Rust. Custom modules in plain Rhai script should be doable in the future, but I don't think it will ever be possible to demo something like registering a custom type without making a specific build with the Rust type already built-in. (rhaiscript/playground#2)

@schungx
Collaborator

schungx commented Jul 4, 2020

How about something like a "click here to load the Playground" button?

I'll start figuring out how to embed JS scripts and custom HTML into mdbook...

There is a chapter in the Rhai book on the playground: https://schungx.github.io/rhai/start/playground.html

Right now it only contains a link. I think this can be beefed up with an embedded Playground!

I don't think it will ever be possible to demo something like registering a custom type

No, I don't think it'll be possible either, short of compiling the Rust code.

@alvinhochun
Contributor Author

How about something like a "click here to load the Playground" button?

I'll start figuring out how to embed JS scripts and custom HTML into mdbook...

There is a chapter in the Rhai book on the playground: https://schungx.github.io/rhai/start/playground.html

Right now it only contains a link. I think this can be beefed up with an embedded Playground!

Well, I was thinking of allowing Rhai code snippets in the Book to be loaded into and run on the playground, kind of like how they do with the Rust snippets. There'd be a "play" button next to the code snippet that will load the playground inline with the code snippet.

For the playground page, I think a link should be enough...

@schungx
Collaborator

schungx commented Jul 4, 2020

Well, I was thinking of allowing Rhai code snippets in the Book to be loaded into and run on the playground, kind of like how they do with the Rust snippets. There'd be a "play" button next to the code snippet that will load the playground inline with the code snippet.

That would actually be cool!

But then I'd probably need to revise the code snippets to be higher quality. For example, right now I'm just doing:

let x = 42;
x == 42;        // x is 42

To be a self-running snippet, it probably needs to be:

let x = 42;
print(x);          // prints 42

In a screenful of examples, it may actually be less readable. I'll have to think about this.

And also, a lot of Rhai scripts in the book depend on registered functions to work, so unless we build separate WASM modules with different functions, we'll have a problem running them...

@alvinhochun
Contributor Author

alvinhochun commented Jul 4, 2020

Well, I was thinking of allowing Rhai code snippets in the Book to be loaded into and run on the playground, kind of like how they do with the Rust snippets. There'd be a "play" button next to the code snippet that will load the playground inline with the code snippet.

That would actually be cool!

But then I'd probably need to revise the code snippets to be higher quality. For example, right now I'm just doing:

let x = 42;
x == 42;        // x is 42

To be a self-running snippet, it probably needs to be:

let x = 42;
print(x);          // prints 42

In a screenful of examples, it may actually be less readable. I'll have to think about this.

Perhaps print(x == 42) is fine too, or perhaps with a function which prints a raw debug representation (like dbg!() in Rust)?

Or perhaps a REPL-style execution would work better for some snippets?

And also, a lot of Rhai scripts in the book depend on registered functions to work, so unless we build separate WASM modules with different functions, we'll have a problem running them...

For those examples I think you can just not enable the playground.

We can make it work if you are ok with building a playground with those functions included. But synchronizing the code between the build and the book will be a bit of trouble. I think it'll need a tool to automatically generate the code from the snippets in the book. Let's perhaps ignore that for now.

@schungx
Collaborator

schungx commented Jul 4, 2020

Perhaps print(x == 42) is fine too, or perhaps with a function which prints a raw debug representation (like dbg!() in Rust)?

There is debug(...)...

Let's perhaps ignore that for now.

Yup.

@schungx
Collaborator

schungx commented Jul 5, 2020

Hi @alvinhochun, if I add a lifetime and a reference to TokenizeState, will it screw up your playground?

@alvinhochun
Contributor Author

Hi @alvinhochun, if I add a lifetime and a reference to TokenizeState, will it screw up your playground?

Yes, probably. The highlighting process requires TokenizeState to be passed to JavaScript, which cannot handle references at all. Preferably it should also be trivially serde-able, because when I hand out Rust objects to JavaScript and they get GC'ed without being explicitly freed, it actually leaks memory on the WASM heap; the only way to prevent this is to convert them to JS objects.

@schungx
Collaborator

schungx commented Jul 5, 2020

OK then, I'll find a way of not having to do this.

@alvinhochun
Contributor Author

I'm thinking perhaps I should make a Reddit post about the playground, how do you feel about this?

@schungx
Collaborator

schungx commented Jul 6, 2020

I'm thinking perhaps I should make a Reddit post about the playground, how do you feel about this?

This is a great idea. However, you really need a semi-permanent URL for this, because you don't want it to change later on.

Have you decided on the final URL for the playground?

Right now it is a couple of links to different versions.

Maybe have a landing URL that is the current stable version, then a link to "vnext" or "experimental" on the landing page?

@schungx
Collaborator

schungx commented Jul 6, 2020

Also, right now, there is a noticeable pause when the user presses Run the first time. That is to wait for the WASM package to load. This is bad for user experience, especially the very first button click.

I'd suggest either pre-loading that WASM package or put up a spinning loading gif...

@alvinhochun
Contributor Author

I'm thinking perhaps I should make a Reddit post about the playground, how do you feel about this?

This is a great idea. However, you really need a semi-permanent URL for this, because you don't want it to change later on.

Have you decided on the final URL for the playground?

Right now it is a couple of links to different versions.

Maybe have a landing URL that is the current stable version, then a link to "vnext" or "experimental" on the landing page?

I suppose I will be keeping https://alvinhochun.github.io/rhai-playground-unstable/. If you think it is necessary to have one to be called "stable" I would just copy the current build to https://alvinhochun.github.io/rhai-playground/.

@alvinhochun
Contributor Author

Also, right now, there is a noticeable pause when the user presses Run the first time. That is to wait for the WASM package to load. This is bad for user experience, especially the very first button click.

I'd suggest either pre-loading that WASM package or put up a spinning loading gif...

How about I just print "Initializing Web Worker..." to the output box for now?

@schungx
Collaborator

schungx commented Jul 7, 2020

Suggestion:

Right now it says "Running" on the Stop button's tooltip - if it is actually running, it'd start counting.

Therefore, you can change that text to "Initializing, please wait...", plus also put a message to the output box.

That should do it.

@alvinhochun
Contributor Author

Ok, I've dealt with some of the issues. Most notably, there is now only one .wasm file, which saves about 150KiB of gzipped download when loading the worker. Frequent output printing no longer takes as many resources (you can run primes.rhai with print without the browser slowing to a crawl). I also added a loading tooltip like you suggested.

@alvinhochun
Contributor Author

alvinhochun commented Jul 7, 2020

I made https://alvinhochun.github.io/rhai-demo/ a redirect. You can change all references to the new URL https://alvinhochun.github.io/rhai-playground-unstable/.

I'll probably make a Reddit post later today...

@schungx
Collaborator

schungx commented Jul 10, 2020

@alvinhochun
Contributor Author

I implemented a modules proof-of-concept on the modules-test branch (not deployed). (Honestly, I find my code quite amusing...)

So, I want to allow scripts to be added as modules with user-specified names, which can be used when running the main script. Of course, the Playground has to provide a way to manage the modules, perhaps even the ability for them to be exported and re-imported. Does that make sense?

Or would you suggest I work on something else before that?

@schungx
Collaborator

schungx commented Jul 12, 2020

So, I want to allow scripts to be added as modules with user-specified names, which can be used when running the main script. Of course, the Playground has to provide a way to manage the modules, perhaps even the ability for them to be exported and re-imported. Does that make sense?

That would make it more than a simple Playground, though. You're going to be keeping a local store of scripts on behalf of users.

You may actually have a "standard" list of modules and hook up a module resolver so users can load different pre-built modules. But to allow users to register their own modules and use them in their scripts, that's going to add a whole new dimension of functionality.

You might as well also have script storage.

@schungx
Collaborator

schungx commented Jul 12, 2020

But still, having modules working in WASM is way cool!

@alvinhochun
Contributor Author

alvinhochun commented Jul 12, 2020

That would make it more than a simple Playground, though. You're going to be keeping a local store of scripts on behalf of users.

You may actually have a "standard" list of modules and hook up a module resolver so users can load different pre-built modules. But to allow users to register their own modules and use them in their scripts, that's going to add a whole new dimension of functionality.

You might as well also have script storage.

I had given script storage a bit of thought before. The "simple" way is to use localStorage to store scripts, but localStorage size is rather limited. I have no experience with IndexedDB and it seems a bit complicated at first glance. Therefore I want to leave this for later.

For now, I wanted to just make a per-session (i.e. until page refresh) script storage to start with. I can perhaps implement something like drag-drop to add files as scripts to make it easier to load user modules. If you want to suggest any "standard modules", I can include them too.

There is also the case of embedding - if a page embeds the playground, I would like the page to be able to provide predefined modules, and also not have it affect the local storage.
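The per-session storage mentioned above could start as little more than an in-memory map; this is a hypothetical sketch, not the playground's actual code, and `SessionScripts` is a made-up name:

```rust
use std::collections::HashMap;

/// Hypothetical per-session module store: lives only in memory, so it
/// disappears on page refresh, matching the "start with" plan above.
#[derive(Default)]
struct SessionScripts {
    modules: HashMap<String, String>, // module name -> Rhai source
}

impl SessionScripts {
    /// Register (or replace) a module under a user-specified name.
    fn add(&mut self, name: &str, source: &str) {
        self.modules.insert(name.to_string(), source.to_string());
    }

    /// Look a module up by name, as a module resolver would.
    fn resolve(&self, name: &str) -> Option<&str> {
        self.modules.get(name).map(|s| s.as_str())
    }
}
```

An embedding page could pre-populate such a store with its own predefined modules without touching any persistent storage.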

@schungx
Collaborator

schungx commented Jul 12, 2020

For now, I wanted to just make a per-session (i.e. until page refresh) script storage to start with. I can perhaps implement something like drag-drop to add files as scripts to make it easier to load user modules. If you want to suggest any "standard modules", I can include them too.

Well, if you do that, people are gonna hate you... nobody wants to redo a whole bunch of modules the next time round.

I think IndexedDB is probably the best.

Actually localStorage shouldn't be too bad. The typical scripts are very short. They should fit comfortably inside localStorage.

@schungx
Collaborator

schungx commented Aug 4, 2020

Pinging @alvinhochun ... The latest drop adds closures support. Would be really interested to see it running in the playground! :-D

@alvinhochun
Contributor Author

Done, I've bumped it to Rhai 0.18.1 and it's deployed.

(As you can see, there haven't been any changes recently because I shifted my focus to another project, but I will get back to it soon.)

@schungx
Collaborator

schungx commented Feb 4, 2021

Since the Playground is now part of the org, closing this now.

@schungx schungx closed this as completed Feb 4, 2021