[ale] Linux kernel

Jon "maddog" Hall jon.maddog.hall at gmail.com
Fri Nov 17 08:15:17 EST 2023


Steve,

>After all, Dbase was an interpreter so it had to be reparsed on every
>execution.

Interpreters are separate beasts all to themselves, and I have never
studied Dbase, but some interpreters keep a partial compilation around and
re-evaluate the source code only when it changes.  Interpreters are often
used instead of compilers because they keep a lot more information around
for debugging.  For example, they can point to the exact source code
statement and data item that caused an overflow or other exception.
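
A toy sketch of both ideas in Python (this is NOT how Dbase works; the
cache and the names here are purely illustrative): re-parse only when the
source text changes, and point at the exact source line when something
blows up:

    import hashlib

    _cache = {}          # hash of source text -> compiled code object

    def run(source, filename="<script>"):
        key = hashlib.sha256(source.encode()).hexdigest()
        if key not in _cache:
            # Re-parse/compile only when the source text has changed.
            _cache[key] = compile(source, filename, "exec")
        try:
            exec(_cache[key], {})
        except Exception as exc:
            # The interpreter still has the source around, so it can point
            # at the exact statement that raised the exception.
            tb = exc.__traceback__
            while tb.tb_next:
                tb = tb.tb_next
            bad = source.splitlines()[tb.tb_lineno - 1]
            print(f"line {tb.tb_lineno}: {bad!r} raised {exc!r}")

    run("x = 1\ny = x / 0\n")    # reports line 2: 'y = x / 0'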

>Speaking of all that, what do you know about P-Code?

It has been a long time since I looked at it, but P-Code is more or less a
generic term for a level of compilation that produces "atoms" which can
then be executed across architectures.   All of the lexical and syntactical
analysis has been done, and the "atoms" that would normally go to the code
generator (perhaps after some high-level optimizations) to produce the
actual architecture-specific binary are emitted instead.

Imagine that each atom generated by the compiler simply calls a subroutine
on the target system, and that subroutine executes what the atom said to
do.   This was used by Pascal (the first place I heard of P-code), but has
also been used by Java (Java bytecode is the essence of this) and other
systems.   A sub-class of this is what is known as "threaded interpretive
languages".

This can run as fast as, or even faster than, conventionally compiled code,
because the "subroutines" can be tuned very tightly to the architecture and
very highly optimized.

>If I ever get to the point where my Stylz compiler is too slow, I'll write
>back to you for advice.

It has been over thirty years since I taught compiler design, although I do
take an interest from time to time in optimization techniques.
When it comes to your compiler being too slow, I would first profile the
compiler and see where it is spending its time and why, just as I would
with any other program.
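
If the compiler happened to be written in Python, a first profiling pass
might look like the sketch below; "compile_file" and the input file name
are hypothetical stand-ins for whatever the real entry point is:

    import cProfile, pstats

    from stylz import compile_file          # hypothetical entry point

    with cProfile.Profile() as prof:
        compile_file("big_input.stylz")     # hypothetical large input
    pstats.Stats(prof).sort_stats("cumulative").print_stats(15)

gprof or perf will do the same job for a compiler written in C.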

A friend of mine, Tom Tresvik (unfortunately now deceased), took the
compiler that was building the Ultrix kernel and put its source code and
temporary files on a RAM disk.  Removing that I/O from the spinning disk
cut the compilation time of the Ultrix kernel on a VAX-11/780 from 24 hours
to less than an hour.

If I go on any further I might launch into a deep discussion of
optimizations, how important they are, and how some people say that "memory
is cheap, processors are fast, cores are plentiful, and optimization is
not really needed"....a statement *almost* as stupid as "comments slow
compilation".   Actually there are many topics that people should really
understand but don't, and just thinking about them makes me upset.

Warmest regards,

maddog



On Thu, Nov 16, 2023 at 8:08 PM Steve Litt via Ale <ale at ale.org> wrote:

> Thanks maddog!
>
> I would have figured the lexical analysis would have gotten rid of the
> comments right away so nothing downstream had to deal with it. If I
> ever get to the point where my Stylz compiler is too slow, I'll write
> back to you for advice. Right now my level of understanding is so low
> that it sounds like a pipedream, but I think I can do it.
>
> Copy that on comments making trouble with paper tape and 300 baud
> modems. In fact, my buddy Bill who is a huge Foxbase expert, told me
> around 1987 that for performance reasons he had to keep variable names
> short in software he wrote in Dbase. After all, Dbase was an
> interpreter so it had to be reparsed on every execution. Ugh!
>
> Speaking of all that, what do you know about P-Code?
>
> Thanks,
>
> SteveT
>
> Jon "maddog" Hall said on Thu, 16 Nov 2023 07:43:09 -0500
>
> >Steve,
> >
> >Different compilers are written in different ways, but in general you
> >are right.   Compared to the lexical analysis, syntactical analysis and
> >optimization stages (ESPECIALLY OPTIMIZATION) the tokenization stage
> >(which includes removing comments) is a blip in the timeline of
> >compilation.
> >
> >I will say that comments DID have a real impact when compilers were
> >reading in source code on paper tape on an ASR-33 Teletype, but that
> >was the last time I even considered the issue of compilation time
> >being slowed down by comments.
> >
> >md
> >
> >On Wed, Nov 15, 2023 at 10:08 PM Steve Litt via Ale <ale at ale.org>
> >wrote:
> >
> >> Jon "maddog" Hall via Ale said on Wed, 15 Nov 2023 19:28:38 -0500
> >>
> >> >The other thing that surprised me was the passage that said
> >> >"Comments just slow down the compilation".
> >> >
> >> >Paaaaleese!  Even the heaviest commented source code on a compiled
> >> >program would be a breeze for the compiler to handle.  This excuse
> >> >is fake news.
> >>
> >> Maddog, is the first step, before lexical analysis, replacing all
> >> space characters and newlines with tokens so the lexical analyzer
> >> sees the program as a single string?
> >>
> >> If that's the case, I can see where // to the next linefeed and /* to
> >> the next */ would be very fast.
> >>
> >> Thanks,
> >>
> >> SteveT
> >>
> >> Steve Litt
> >>
> >> Autumn 2023 featured book: Rapid Learning for the 21st Century
> >> http://www.troubleshooters.com/rl21
>
>
> SteveT
>
> Steve Litt
>
> Autumn 2023 featured book: Rapid Learning for the 21st Century
> http://www.troubleshooters.com/rl21

