This time with more manual checking and using git blame -M -C, so that
a few cases of copied code get a copyright notice corresponding to
their initial introduction.
The method to get the C4AulScript instance that stores the bytecode of a
function is ridiculous, but at least it's now encapsulated in a function.
While at it, this also always stores the bytecode index instead of a
bytecode pointer in the function.
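The index-instead-of-pointer change can be sketched as follows. This is an illustrative model, not the engine's real types (`Script`, `Function`, `CodePos`, and `GetCodeOwner` are invented names): an index into the owning script's bytecode buffer stays valid if the buffer reallocates, while a raw pointer would dangle.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical sketch: a script owns a flat bytecode buffer, and each
// function stores an index into that buffer rather than a raw pointer.
struct Script {
    std::vector<int> Code;  // stand-in for the real bytecode chunks
};

struct Function {
    Script* Owner = nullptr;   // back-pointer to the owning script
    std::size_t CodePos = 0;   // bytecode *index*, not a pointer

    // The "ridiculous" lookup is at least wrapped in one place:
    Script* GetCodeOwner() { return Owner; }

    int* GetCode() { return &GetCodeOwner()->Code[CodePos]; }
};
```

Resolving the pointer lazily through `GetCode()` means appending more bytecode to the script never invalidates a function's stored position.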
SPos is only needed to display debug messages. Not keeping that data in the
cache for normal execution speeds up script execution by ten percent on at
least one artificial benchmark.
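The idea behind the speedup can be sketched as a hot/cold split (the names below are invented, not the engine's): the interpreter's per-instruction data stays in a compact hot array, while the source positions used only for debug messages live in a parallel cold structure that is consulted only on errors.

```cpp
#include <cassert>
#include <cstddef>
#include <string>
#include <vector>

// Illustrative sketch: smaller hot entries mean more instructions fit
// in a cache line, so normal execution never touches the debug data.
struct HotOp {
    int Opcode;
    int Operand;
};

struct DebugInfo {
    std::vector<const char*> SPos;  // looked up only for error reporting
};

// Fetch the source position for instruction `ip`, if any is recorded.
const char* GetSPos(const DebugInfo& dbg, std::size_t ip) {
    return ip < dbg.SPos.size() ? dbg.SPos[ip] : nullptr;
}
```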
Previously, the tokenizer would emit special tokens for a few keywords. Now
every keyword is handled by the parser. This allows keywords to be used as
identifiers wherever that is unambiguous.
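A minimal sketch of the approach, with invented names (the real tokenizer and parser differ): the tokenizer emits a plain identifier token for every word, and the parser decides from context whether that word acts as a keyword.

```cpp
#include <cassert>
#include <string>

// Sketch: no dedicated keyword token types exist anymore.
enum class TokenType { Identifier, Number, End };

struct Token {
    TokenType Type;
    std::string Text;
};

// Only where the grammar allows a keyword (e.g. at the start of a
// statement) does the parser treat the word specially; everywhere
// else the very same token is just an ordinary identifier.
bool IsKeywordHere(const Token& tok, bool statementContext) {
    if (tok.Type != TokenType::Identifier) return false;
    if (!statementContext) return false;
    return tok.Text == "if" || tok.Text == "while" || tok.Text == "return";
}
```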
This is the only operator that is necessary for the ActMaps. Ideally, every
operator would work, but I don't know how to achieve that without massive
code duplication. So just duplicate a little to make it work.
Currently kills:
* a[...] = ... -> Currently a copy is changed and then discarded, need to rework array semantics.
* bla->EffectVar(...) = ... -> Normal EffectVar is rewritten to SetEffectVar - but this won't work for calls. We need a better solution anyway.
* All scripts that use references, obviously. Just have a look at the parser warnings.
... which evaluates to a copy of the elements with indices in the range
[begin, end), where either index may be omitted, defaulting to 0 or the
length of the string, respectively.
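The described semantics can be sketched in C++ as follows (a model of the behavior, not the engine's implementation); clamping the indices keeps out-of-range values safe:

```cpp
#include <algorithm>
#include <cassert>
#include <string>

// Return a copy of the elements with indices in [begin, end);
// begin defaults to 0 and end to the length of the string.
std::string Slice(const std::string& s, std::size_t begin = 0,
                  std::size_t end = std::string::npos) {
    end = std::min(end, s.size());
    begin = std::min(begin, end);
    return s.substr(begin, end - begin);
}
```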
Before, var a = nil; a += 25; would result in any/25, violating a type
constraint. The operators also didn't check that their input was an integer.
/= and / now raise an exception on division by zero.
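The checks can be sketched like this. This is a hedged model with an invented `Value` type, not the engine's real value representation: the operators reject non-integer operands instead of silently producing a value, and division raises an exception on a zero divisor.

```cpp
#include <cassert>
#include <stdexcept>

// Invented stand-in for the engine's value type: either nil or an int.
struct Value {
    bool IsNil = true;
    int Int = 0;
};

// += must not turn a nil operand into a number silently.
Value AddAssign(Value a, Value b) {
    if (a.IsNil || b.IsNil)
        throw std::runtime_error("operand is not an integer");
    return Value{false, a.Int + b.Int};
}

// / and /= raise an exception on division by zero.
Value Divide(Value a, Value b) {
    if (a.IsNil || b.IsNil)
        throw std::runtime_error("operand is not an integer");
    if (b.Int == 0)
        throw std::runtime_error("division by zero");
    return Value{false, a.Int / b.Int};
}
```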
At the moment, only function parameters are checked, and only immediate
type errors are caught - if there's a variable in between, the parser
won't see it. Still useful to catch some errors before running the
code.
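The limitation can be sketched as follows (invented names, not the real parser): the parser only knows the type of an immediate literal, so an argument passed through a variable has an unknown type and the check is skipped rather than reporting a false positive.

```cpp
#include <cassert>

// Invented type lattice for the sketch: Unknown means "defer to runtime".
enum class Type { Unknown, Int, String };

// Returns false only on a provable mismatch; Unknown always passes,
// which is why a variable in between hides the error from the parser.
bool CheckParameter(Type declared, Type argument) {
    if (argument == Type::Unknown || declared == Type::Unknown)
        return true;  // nothing provable at parse time
    return declared == argument;
}
```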
The C4ID syntax removal was not thorough enough: the tokenizer did
nothing in the C4ID state anymore except traverse the string, but not all
instances of reaching that state were removed. As a result, the tokenizer
read past the end of the script.
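The class of bug fixed here can be illustrated with a minimal scan loop (invented helper, not the real tokenizer): every state of a hand-written tokenizer must test for the end of the buffer before consuming a character, otherwise a state that merely traverses the string can walk past the terminator.

```cpp
#include <cassert>
#include <cstddef>

// Advance past a word; the end check comes before the character read,
// so reaching this state at the end of the script is harmless.
std::size_t SkipWord(const char* script, std::size_t pos, std::size_t len) {
    while (pos < len && script[pos] != ' ')
        ++pos;
    return pos;
}
```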