CHANGELOG

v0.3.0 (2024-06-07)

Chore

  • chore: add release command (89e8758)

Feature

  • feat: optimize calling self._mark() (002d853)

  • feat: include generated parser in the repo (e87ed02)

Test

  • test: update test fixture (6999da8)

v0.2.0 (2024-06-03)

Chore

  • chore: add task to compile meta parser and add more benchmarks (263f7f4)

Documentation

  • docs: baseline memory usage and parser file size (57b9f92)

Feature

  • feat: optimize gathered rule generation (6b38b59)

  • feat: handle elem* and elem+ repetition without code duplication to reduce the generated parser size (0bd67ca)

  • feat: optimize getting locations; chore: merge frequently run profile suites (18b6be1)

  • feat: tersely pack return statement generated (afbcd9d)

  • feat: no more empty spaces in the generated parser actions (15d974c)

Fix

  • fix: test errors for the change in ensure_real function (fa37942)

Refactor

  • refactor: short start_location names in generated parser (4c011b2)

  • refactor: extract xonsh specific parser generation to a separate module (e996217)

  • refactor: remove unused files from pegen project (cd1a65a)

  • refactor: optimize memoize_left_rec (3622d2e)

  • refactor: optimize memoize function by separating verbose mode operations (a14dba2)

Test

  • test: fix building parser fixture (9ead0e4)

  • test: remove memsuit as it is not relevant (8378811)

  • test: add large file benchmarking (d47e6d7)

v0.1.0 (2024-05-30)

Breaking

  • refactor!: make walrus operator default in tokenizer (8644077)

  • fix!: make tokenizer py3.9+; any minor version will likely break (5953b68)

  • refactor!: make async/await keywords; now the xonsh tokenizer is py3.7+ and py3.6 support is dropped (9912fe3)

Chore

  • chore: update profiling code (1ce7736)

  • chore: update benchmark suite (4a6c0da)

  • chore: update ci tests (b5ddd3b)

  • chore: cleanup repo (1cc3339)

  • chore: fix ci test (adcd2a8)

  • chore: exit at first fail (273254d)

  • chore: set timeout (4e02683)

  • chore: use latest pip in CI (de33bd4)

  • chore: setup github actions (202ac4f)

  • chore: update tasks (25550f8)

  • chore: add pytest-testmondata (d16a1b1)

  • chore: add task to test (91ff36b)

  • chore: update tests (ecddd55)

  • chore: use single xonsh.gram (d8decfd)

  • chore: add ipython (c1aa19e)

  • chore: update tasks (cb6d0a1)

  • chore: add flask deps for pegen-web module (88bcc1a)

  • chore: update mypy config (ce9ade6)

  • chore: remove taskfiles (cf45fb7)

  • chore: update ruff settings (3b68f45)

  • chore: upgrade pre-commit plugins (d66b247)

  • chore: update taskfile (e3a9c41)

  • chore: update ignore (f579b17)

  • chore: upgrade pre-commit plugins (daafc93)

  • chore: test mem usage (9e09af3)

  • chore: black format ply other than yacc.py (a589ea8)

  • chore: consider rust bindings (e45d3cd)

  • chore: add monkeytype commands (8cb16ae)

  • chore: use monkeytype annotate code (5ee23c6)

  • chore: add pre-commit (f8a2ddd)

  • chore: use asv for benchmarks (e7e9eb7)

  • chore: use pdm for installing deps (27b1095)

  • chore: add tasks (63845c8)

  • chore: ignore IDE (c167e8f)

  • chore: initial commit generated from cookiecutter https://github.com/frankie567/cookiecutter-hipster-pypackage (d6d80c5)

Documentation

Feature

  • feat: compile with mypyc optionally (e4dd77a)

  • feat: strict mypy checking (caba69c)

  • feat: parse f-strings py312 (da9369f)

  • feat: tokenize py312 fstrings (https://peps.python.org/pep-0701/) (5955aa5)

  • feat: use enums for Tokens and parse exact-tokens as OP (d7a146b)

  • feat: generate parser during build (021c6fe)

  • feat: implement with macro multi indents (43a0f20)

  • feat: implement with-macros single indent (a739f72)

  • feat: enable handling subproc macros (2f67e8c)

  • feat: handle parentheses inside macros (b204689)

  • feat: handle macro parameters with whitespace (9d72d91)

  • feat: tokenize whitespaces/Operators as their own Tokens instead of OP (2d20781)

  • feat: ability to accept hard keywords in macros and sub-procs (72cdb16)

  • feat: implement macros basic level (5637e97)

  • feat: make the whole test suite pass or xfail (222c520)

  • feat: implement &&, || combinators (cde57db)

  • feat: implement path-search regexes (baa0f09)

  • feat: implement help? syntax (02b5165)

  • feat: handle adjacent replacement and pass as *cmds prefix/suffix to the $() (6ee7b5f)

  • feat: implement @() - python-expr operator (c309d59)

  • feat: implement @$() - subproc_injection (731e02a)

  • feat: implement !(), ![], $[] operators (8081eb6)

  • feat: implement splitting by WS/NL (fedc23f)

  • feat: tokenize search-path (2b0972d)

  • feat: add $() handling simple cases (55063a7)

  • feat: implement env names and env expressions, $env and ${expr} (8c18a6d)

  • feat: tokenize xonsh operators separately instead of returning token.OP (4d75c4e)

  • feat: support py311 & py312 (https://github.com/we-like-parsers/pegen/pull/95/files) (c5e4f74)

  • feat: add tokenize code for untokenizer from py312 stdlib (4b7bcda)

  • feat: add fstring tokens from py3.12 (8426520)

  • feat: implement parsing $env vars (b5d8c0d)

  • feat: make parser py39+ (c27828b)

  • feat: handle tokens separately in generator (fe55745)

  • feat: handle loading ply parser (387519b)

  • feat: use taskfile.yml with source watch (b88ce44)

  • feat: pass custom token set to PythonParserGenerator (506e3d1)

  • feat: add pegen from CPython/Tools (e8e36a7)

  • feat: handle env names in tokens (8eef67e)

  • feat: implement path literals (4d30061)

  • feat: simplify tokenize.py (64385ae)

  • feat: include tests from xonsh (e3eee5d)

  • feat: use tokenizer from package (3ae4e26)

  • feat: add xonsh tokenize (584ad77)

  • feat: move towards custom tokenizer (e282596)

  • feat: add tests from pegen site (f7ab935)

  • feat: add pegen project files (c63ac43)

  • feat: add peg_parser from parser (9c2958d)

  • feat: update tokenizer changes from xonsh v0.16.0 (3e654a2)

  • feat: add mypyc pickle (b79ca3f)

  • feat: add mypyc compiled data format (6230653)

  • feat: able to load lr-tables in multiple formats (c934f65)

  • feat: add benchmark for different types of tables (001cb3f)

  • feat: exp-1 initial sizes (98ed879)

  • feat: support writing to Python lr-tables (2170de3)

  • feat: add tests from xonsh repo (306877c)

  • feat: add preprocessing based parser (cb19df4)

  • feat: include tokenize_rt module from https://github.com/asottile/tokenize-rt/blob/c2bb6f32371408c0490e817b6dd48285d804e36d/tokenize_rt.py (55f5d3e)

  • feat: write python lr-table (e3746f0)

  • feat: add execer from xonsh package (47b76f4)

  • feat: use setuptools as package builder (c61de1e)

  • feat: optimize having actions/gotos as a tuple instead of a dict; the dict was unnecessarily used as a container to hold int keys (fe1ee46)

  • feat: improve type annotations (9ac82d2)

  • feat: tracemalloc benchmarking (1c30fed)

  • feat: the first optimization iteration improved the parsing speed and memory usage (7e18907)

  • feat: optimize loading as pickle file v5 (34a01df)

  • feat: add support for loading generated table (7201341)

  • feat: add a function to write parser table as json (8a8f3f0)

  • feat: add benchmark to track parser size (5700928)

  • feat: update to ply new format (16d5f5c)

  • feat: copy ply yacc from subtree (18ef242)

  • feat: copy parser files from xonsh repo (commit 12ab76e5359899efd51fa340ebc0c9a24bad3682) (ee24ec9)

Fix

  • fix: handle fstrings with newlines (6da8f1c)

  • fix: remove duplicate annotated_rhs from CPython PR https://github.com/python/cpython/pull/117004/files (95e0f68)

  • fix: update tests for the change in using enum tokens (12690cd)

  • fix: clash between proc and with macros (7f28ae8)

  • fix: handle sub procs regression fails (f7ae053)

  • fix: import Target for del tests (ad81128)

  • fix: store env variable case (8a74396)

  • fix: implement parenthesis level for xonsh tokens (f1a36b1)

  • fix: deprecation warning (8ee4720)

  • fix: deprecation warning ast.Str (7908f09)

  • fix: deprecation warning ast.Str (ecc96b7)

  • fix: update tests of older versions than py39 (de3674c)

  • fix: exporting parser table as jsonl (de239da)

  • fix: ply parser has to return None in case of failure (92d25bd)

  • fix: tokenizing with correct lexpos (62c3f1f)

  • fix: the type returned after parse (c0ed347)

  • fix: update tests (3976f97)

  • fix: type annotate the code fully (736622e)

  • fix: update type hint import (ca9c53d)

  • fix: ruff linter errors and disable mypy (0d1b772)

Refactor

  • refactor: simplify token string handling (c75f977)

  • refactor: update tokenizing WS and simplify pseudo match (6c245fa)

  • refactor: mark symbols with single quotes (9c7b2a3)

  • refactor: use symbols directly in grammar spec for clarity (cf5e6ce)

  • refactor: restructure the code, make pegen the main (1975bd6)

  • refactor: move tasks out (05c80ad)

  • refactor: cleanup functions (2c69ae0)

  • refactor: handle OP tokens separately as exact_token_types (422bcde)

  • refactor: code cleanup (19337a8)

  • refactor: cleanup tokenizer (b80380f)

  • refactor: update tokenizer (fae1b5a)

  • refactor: enable more ruff plugins (244248a)

  • refactor: update ${..} handling (f679334)

  • refactor: move tests out of package (5ed245e)

  • refactor: update xonsh token names (3274008)

  • refactor: simplify tokenize.py further: remove bytes handling and move contstr handling to its own function (e7380e7)

  • refactor: simplify tokenize.py with states (29133e0)

  • refactor: move untokenize to its own module (8396d1c)

  • refactor: remove ply based parser dir (a18b823)

  • refactor: make parser py39+ and optimize imports (8a94ca2)

  • refactor: adding from we-like-parsers/pegen (93fae9e)

  • refactor: adding from we-like-parsers/pegen (7f8d9a3)

  • refactor: move parse methods to class (05edb2a)

  • refactor: move ply tests (087dbeb)

  • refactor: update tokenizer code (3b2268b)

  • refactor: copy tokenize from python stdlib v310 (2f51788)

  • refactor: move tokens (f9e1d4d)

  • refactor: ruff style (f96462e)

  • refactor: overwrite header (40ba0a9)

  • refactor: accept str path to load parser (a36dfe8)

  • refactor: merge overridden actions to base class (ba48b11)

  • refactor: use unparse to test parsing (bbe4043)

  • refactor: generate docstring dynamically instead of creating functions multiple times during runtime (b8be353)

  • refactor: improve typing of ply lrparser module (2fea65d)

  • refactor: type lexer and parser modules (bf10aaa)

  • refactor: option to debug parser generation (41a3345)

  • refactor: update to fstring from sly (8173021)

  • refactor: update sample usage (47de232)

  • refactor: Yaccsymbol.slots (058ec50)

  • refactor: update benchmark time function (3c81a59)

  • refactor(ply): split table generator and loader to optimize the loading time; generation is mostly done one time (273df2b)

  • refactor(ply): remove unused global variable (223127a)

  • refactor: return class so options can be passed (5507cbc)

  • refactor: update imports and functions missing from xonsh (b49c22e)

  • refactor: update lexer functions from xonsh.tools (066c78e)

Style

  • style: update benchmark file (63e1d5d)

  • style: add annotations to lexer (c567d7c)

Test

  • test: update fstring tests with xonsh symbols (c7679d4)

  • test: parameterize test cases (007deb5)

  • test: rename file (3802dde)

  • test: update tests (e25b7e7)

  • test: more passing tests (1ea9327)

  • test: update lexer tests for the operator token handling change (f0f74cf)

  • test: now test_invalid works (cb95266)

  • test: post verbose parser output upon the first 3 failures (396e304)

  • test: organize tests data (a5ef7db)

  • test: organize tests (d76ba9e)

  • test: tidy test cases (46a91af)

  • test: fix test data (337213b)

  • test: update parser tests (d66124a)

  • test: move test cases to files and split big test_parser.py (e85b333)

  • test: remove pure python tests as they are already covered in tests/test_ast_parsing.py (b5fe044)

  • test: update tests for the tokenizer (a06b9f4)

  • test: update tests to mark xfail xonsh tokens (bf7c697)

  • test: fix test errors/fails of missing fixtures (f52eaf9)

  • test: update tests and fix mypy errors (4a6dbd6)

  • test: rerun parse if previous failed (b41e315)

  • test: update tests to use own tokenizer (8c2c4d1)

  • test: add pre-processor based tests (def09e6)

  • test: add test files from xonsh repo (deca176)

  • test: rename basic sanity tests (190a0ea)

  • test: update sample test (5898e7b)

  • test: xfail xonsh session dependent tests (8f8367a)

  • test: make ast tests pass (8becc23)

  • test: split parser tests (050aaee)

  • test: add test files from xonsh repo (c300165)

  • test: add invalid state (7bebe43)

Unknown