Compilers: Register Allocation Against Complex Branching/Jumps


I've taken a side interest in optimizers and how they work, particularly with respect to register allocation. I have a background in writing high-level interpreters that didn't bother to generate efficient machine code, so the parts of compiler construction that revolve around parsing, constructing ASTs, etc. are straightforward to me.

As a learning project, I've been trying my hand at a toy compiler that translates a higher-level language down to something close to machine level, the main difference being that it works with variables rather than registers.

Where I'm confused is the low-level optimizer parts, with respect to register allocation over the IR and how it is affected by branching/jumps, using the basic heuristic algorithms and excluding advanced topics like SSA and phi nodes.

A basic example:

a = ...
b = ...
c = ...
d = ...
e = ...
f = ...
g = ...

jump_if x == 1, section1
jump_if x == 2, section2
jump_if x == 3, section3
etc

a = a + b - c * 2
jump end

section1:
; all kinds of stuff happens here with some of the above variables
jump end

section2:
; all kinds of stuff happens here with some of the above variables
jump end

section3:
; all kinds of stuff happens here with some of the above variables
jump section1 ; tricky branch!!!

end:

And perhaps we can throw in loopy logic and all kinds of other branching to make the example more convoluted.

What I don't understand is how all of these branching paths might potentially make all of the variables above 'live', if we consider the paths together instead of each path individually.

What seems to be lacking, to me, is some kind of stack-based block structure where we can have nested blocks, so that register allocation can take into account the variables referenced in the innermost block and its outer blocks, and execute the register allocation heuristic separately on each block/branching path.

In a higher-level IR that is more block-based, it seems easier to deduce the branching paths because branching is constrained to within a block, whereas a low-level IR only barely abstracted above the machine has unconstrained branching that can jump/branch all over the place?

Most IR examples I've seen are rather low-level abstractions of machine code and seem to allow quite chaotic branching (ex: jump tables) in a way that might make it difficult to deduce such blocks/sections/paths.

How do people typically deal with this problem? Is there an algorithm and a clean organization/code design that can break down the possible branching paths given such low-level code that allows such flexible branches/jumps?

Register allocation is a long-standing problem, and there have been a number of research papers in the area. The recent popular approaches are SSA-based register allocation and linear scan register allocation.
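Whichever allocator you choose, the first step for code with arbitrary jumps is the same: split the instruction stream into basic blocks, connect them into a control-flow graph, and run a backward liveness dataflow over it. Here is a minimal sketch of that step in Python; the instruction format and names are invented to resemble your example, and a real compiler would of course have a richer IR.

from collections import defaultdict

# A tiny jump-based IR: each instruction records what it defines ("dest"),
# what it reads ("uses"), and any label/target. The format is made up,
# just detailed enough to drive the analysis.
code = [
    {"op": "assign", "dest": "a", "uses": []},              # a = ...
    {"op": "assign", "dest": "b", "uses": []},              # b = ...
    {"op": "assign", "dest": "c", "uses": []},              # c = ...
    {"op": "jump_if", "target": "section1", "uses": ["x"]}, # jump_if x == 1, section1
    {"op": "assign", "dest": "a", "uses": ["a", "b", "c"]}, # a = a + b - c * 2
    {"op": "jump", "target": "end", "uses": []},
    {"op": "label", "name": "section1"},
    {"op": "assign", "dest": "d", "uses": ["b", "c"]},
    {"op": "jump", "target": "end", "uses": []},
    {"op": "label", "name": "end"},
]

def basic_blocks(code):
    # A new block starts at every label and right after every jump
    # ("leaders" in textbook terms).
    leaders = {0}
    for i, ins in enumerate(code):
        if ins["op"] == "label":
            leaders.add(i)
        if ins["op"] in ("jump", "jump_if") and i + 1 < len(code):
            leaders.add(i + 1)
    starts = sorted(leaders)
    return [code[s:e] for s, e in zip(starts, starts[1:] + [len(code)])]

def successors(blocks):
    # A jump adds an edge to the block holding its label; anything that
    # isn't an unconditional jump also falls through to the next block.
    label_to_block = {ins["name"]: bi
                      for bi, blk in enumerate(blocks)
                      for ins in blk if ins["op"] == "label"}
    succ = defaultdict(set)
    for bi, blk in enumerate(blocks):
        last = blk[-1]
        if last["op"] in ("jump", "jump_if"):
            succ[bi].add(label_to_block[last["target"]])
        if last["op"] != "jump" and bi + 1 < len(blocks):
            succ[bi].add(bi + 1)
    return succ

def liveness(blocks, succ):
    # Backward dataflow, iterated to a fixed point. This is exactly where
    # "all paths" get taken into account: a block's live-out set is the
    # union of its successors' live-in sets.
    live_in = [set() for _ in blocks]
    live_out = [set() for _ in blocks]
    changed = True
    while changed:
        changed = False
        for bi, blk in enumerate(blocks):
            out = set().union(*(live_in[s] for s in succ[bi])) if succ[bi] else set()
            inn = set(out)
            for ins in reversed(blk):
                if "dest" in ins:
                    inn.discard(ins["dest"])
                inn.update(ins.get("uses", []))
            if inn != live_in[bi] or out != live_out[bi]:
                live_in[bi], live_out[bi], changed = inn, out, True
    return live_in, live_out

blocks = basic_blocks(code)
succ = successors(blocks)
live_in, live_out = liveness(blocks, succ)
for bi in range(len(blocks)):
    print("block", bi, "live-in:", sorted(live_in[bi]), "live-out:", sorted(live_out[bi]))

On your full example, section3's "tricky" jump back to section1 just shows up as one more edge in the graph; the fixed-point iteration handles arbitrary loops and jump tables the same way, since nothing here assumes nicely nested blocks.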

Static single assignment form (SSA) has gained popularity for register allocation. The idea is to transform the program so that every variable is assigned exactly once and every variable is assigned before it is used; it is assumed that there is an unlimited number of registers available.
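To give a feel for the renaming part, here is a toy sketch for straight-line code only; at control-flow joins a real SSA construction additionally inserts phi nodes, which you said you want to avoid. The instruction tuples are invented for illustration.

def to_ssa(instrs):
    # Rename variables so every assignment creates a fresh version
    # (a1, a2, ...). Straight-line code only; joins would need phi nodes.
    version = {}                       # variable -> latest version number
    def use(v):
        return f"{v}{version[v]}" if v in version else v
    out = []
    for dest, op, args in instrs:
        new_args = [use(a) for a in args]
        version[dest] = version.get(dest, 0) + 1
        out.append((f"{dest}{version[dest]}", op, new_args))
    return out

# a = b + c; a = a * 2; d = a + b   becomes
# a1 = b + c; a2 = a1 * 2; d1 = a2 + b
print(to_ssa([("a", "+", ["b", "c"]),
              ("a", "*", ["a", "2"]),
              ("d", "+", ["a", "b"])]))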

Once the program is converted to SSA form, register allocation becomes easier. It can be done with graph coloring, and for programs in SSA form the coloring can be done in polynomial time (LLVM is a popular compiler built around SSA). It's a pretty complex topic to discuss in full here, so I recommend reading a couple of papers in the area (a bare-bones sketch of the coloring step follows the list):

  1. Efficiently Computing Static Single Assignment Form and the Control Dependence Graph, by Ron Cytron et al. One of the pioneering papers in the SSA area.

  2. Single-Pass Generation of Static Single-Assignment Form for Structured Languages, by Marc M. Brandis et al.

  3. A Simple, Fast Dominance Algorithm, by Keith D. Cooper et al.
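As promised, here is the bare-bones coloring step, assuming the per-program-point live sets already exist (for example from a liveness pass like the one sketched earlier). This is a simplified illustration, not anything LLVM-specific.

from itertools import combinations

def allocate(live_sets, num_regs):
    # Two variables interfere if they are live at the same program point.
    # Build that interference graph, then greedily color it with at most
    # num_regs colors; variables that can't get a color are spilled.
    interference = {}
    for live in live_sets:
        for v in live:
            interference.setdefault(v, set())
        for a, b in combinations(live, 2):
            interference[a].add(b)
            interference[b].add(a)

    assignment, spilled = {}, []
    # Color the most-constrained variables first (a common heuristic).
    for v in sorted(interference, key=lambda v: -len(interference[v])):
        taken = {assignment[n] for n in interference[v] if n in assignment}
        free = [r for r in range(num_regs) if r not in taken]
        if free:
            assignment[v] = free[0]
        else:
            spilled.append(v)
    return assignment, spilled

# Live sets at three consecutive program points, e.g. from the liveness pass.
regs, spills = allocate([{"a", "b"}, {"a", "b", "c"}, {"b", "c"}], num_regs=2)
print(regs, spills)

Real allocators layer spill-cost heuristics, live-range splitting and move coalescing on top of this skeleton, but the core loop is the same.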

If you don't want to deal with SSA as an intermediate step, you might want to take a look at linear scan register allocation by Massimiliano Poletto. It's a greedy algorithm used in many non-LLVM-based compilers, including V8 and the JVM.
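The heart of linear scan fits in a few lines: compute a live interval per variable over a linearized instruction numbering, sort the intervals by start point, sweep once while freeing registers whose intervals have expired, and when you run out of registers, spill the active interval that ends furthest away. A rough sketch of that sweep (the interval numbers are made up):

def linear_scan(intervals, num_regs):
    # intervals: {var: (start, end)} live intervals over a linearized
    # instruction numbering. One pass in order of start point.
    assignment, spilled = {}, []
    active = []                       # (end, var) for currently live intervals
    free = list(range(num_regs))
    for var, (start, end) in sorted(intervals.items(), key=lambda kv: kv[1][0]):
        # Expire intervals that ended before this one starts.
        for e, v in list(active):
            if e < start:
                active.remove((e, v))
                free.append(assignment[v])
        if free:
            assignment[var] = free.pop()
            active.append((end, var))
        else:
            # Spill heuristic: evict whichever active interval ends last.
            active.sort()
            last_end, victim = active[-1]
            if last_end > end:
                # Steal the victim's register; the victim is spilled instead.
                assignment[var] = assignment.pop(victim)
                spilled.append(victim)
                active[-1] = (end, var)
            else:
                spilled.append(var)
    return assignment, spilled

# Hypothetical live intervals for a few of the variables in your example.
print(linear_scan({"a": (0, 9), "b": (1, 4), "c": (2, 7), "d": (3, 5)}, num_regs=2))

It trades some allocation quality for speed compared to graph coloring, which is why it shows up in JIT compilers where compile time matters.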

