
Slight touchups on the tokenizer

Merged Nev3r requested to merge tokentweaks into next

This alters how quoted strings ("...") are tokenized: instead of ignoring the quote characters, the tokenizer now treats all of the content between them as a single token. It also adds `;` and `=` to the character ignore lists. Existing parsers should be unaffected.

Edited by Nev3r
