Weird Machines, Exploitability, and Provable Unexploitability
Thomas Dullien
[email protected]
[Figure: transition conditions of the IFSM.
Condition b (Store pair in memory, B): s ≠ 0 ∧ p ≠ 0 ∧ ∀(p′, s′) ∈ Memory: p′ ≠ p ∧ |Memory| ≤ 4999; effect: Memory ← Memory ∪ {(p, s)}.
Condition d (Output error message, D; print(0)): s = 0 ∨ p = 0 ∨ |Memory| = 5000.]
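Read as pseudocode, the two conditions describe a small reference model of the machine. A minimal sketch in Python (the retrieval transition is inferred from the program listings in the appendix; class and method names are illustrative):

```python
class IFSM:
    """Reference model: a store of (password, secret) pairs, at most 5000."""
    CAPACITY = 5000

    def __init__(self):
        self.memory = {}

    def step(self, p, s):
        # Condition d: a zero password or zero secret is rejected with output 0.
        if s == 0 or p == 0:
            return 0
        # Retrieval (inferred from the appendix listings): querying a stored p
        # outputs the stored secret and frees the cell.
        if p in self.memory:
            return self.memory.pop(p)
        # Condition d, store path: memory full, output the error 0.
        if len(self.memory) == self.CAPACITY:
            return 0
        # Condition b: store the pair; no output is produced.
        self.memory[p] = s
        return None
```

Sending the same p twice thus first stores and then retrieves-and-frees, which is the behavior the emulation arguments below rely on.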
[Figure: snapshots of the linked-list memory during the attack. Each row shows the [p, s, n] triples in memory, with the free_head value after each step.]

p0 s0 n0 | p1 s1 n1 | p2 s2 n2 (free_head = 9)
p0 s0 n0 | p1 s1 n1 | p2 s2 n2 | pd sd nd (free_head = 12)
p0 s0 n0 | p1 s2 n3 | p2 s2 n2 | pd sd nd (free_head = 6)
p0 s0 n0 | p1 s1 n1 | p2 s2 n2 | pd sd nd (free_head = 3)

Step 3.3: Attacker sent (p2, X), (p1, X), (p3, s3).
p0 s0 n0 | p3 s3 n3 | p2 s2 n2 | pd sd nd (free_head = 6)
p0 s0 n0 | p3 s3 n3 | p4 s4 n4 | pd sd nd (free_head = 12)
p0 s0 n0 | p3 s3 n3 | p4 s4 n4 | pd sd nd (free_head = 12)

Step 5: The attacker sends (s4, X). The machine follows the linked list, interprets s4 as password, and outputs n4.
p0 s0 n0 | p3 s3 n3 | p4 s4 n4 | pd sd nd (free_head = 12)
The machine then sets the three cells to be free, and overwrites the stored pd with free_head. The machine just overwrote pd with free_head:
p0 s0 n0 | p3 s3 n3 | p4 s4 n4 | 12 sd nd (free_head = 7)
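The misinterpretation in Step 5 can be reproduced in a few lines. A sketch, assuming the [p, s, n] triples live in one flat array as in the figure (addresses and values below are made up for illustration): after a one-bit corruption of a next pointer, the traversal lands one cell off, so a stored secret field is compared as a password and the adjacent next field is returned as the secret.

```python
def lookup(memory, used_head, p):
    """Walk the used list of [p, s, next] triples; return the stored s
    for a matching p, or None. Mirrors CheckForPresenceOfP in variant2A.s."""
    i = used_head
    while i != 0:
        if memory[i] == p:        # stored p field
            return memory[i + 1]  # stored s field
        i = memory[i + 2]         # stored next field
    return None

# Two triples: [11, 22, next=6] at index 3 and [33, 44, next=9] at index 6.
memory = [0, 0, 0, 11, 22, 6, 33, 44, 9, 0, 0, 0]
assert lookup(memory, 3, 33) == 44   # normal retrieval

memory[5] ^= 1                       # flip one bit: next pointer 6 -> 7
# The secret 44 now acts as a password, and the next pointer 9
# is returned as if it were a secret.
assert lookup(memory, 3, 44) == 9
```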
a greater gap between its probability of success and the security boundary:

    Θ_exploit = argmax_exploit ( P[s ∈ o_IFSM] − |o_exploit| / 2^31 ).

We now state two lemmas describing the set of states reachable by an attacker. No proof is given, but they are easily verified by inspecting the code.

Lemma 1. All states in Q^trans_cpu are of the following form: q ∈ Q^sane_cpu with exactly one partially-stored tuple (corresponding to program lines 36 and 37) - a short time period where one of the memory cells contains a p ≠ 0 with a stale s.

Lemma 2. An attacker that can flip a bit can only perform the following 5 transitions:
(1) Replace a (p, s) tuple in memory with (p ⊕ 2^i, s).
(2) Transition a state with memory containing two tuples (p, s_1), (p ⊕ 2^i, s_2) into a state where memory contains (p, s_1), (p, s_2).
(3) Replace a (p, s) tuple in memory with (p, s ⊕ 2^i).
(4) Replace a (p, 2^i) tuple with (p, 0).
(5) Replace a (2^i, s) tuple with (0, s).

Note that 1, 3 and 5 are all transitions from Q^sane_cpu to Q^sane_cpu. Only 2 and 4 lead to Q^weird_cpu.

Now consider S ∈ Q^n_cpu, the sequence of state transitions of cpu for a successful attack by Θ_exploit.

Theorem 1. Any sequence of state transitions during a successful attack that uses transitions 1, 3, or 5 above can be emulated by an attacker that can not flip memory bits in at most 10000 steps.

Proof. For all cases, the attacker without the ability to flip bits sends (p_i, x_i) tuples to fill all empty cells preceding the cell in which Θ_exploit flips a bit, performs the action described, and then sends (p_i, x_i) to free up these cells again. We denote an arbitrary value with x.

For case 1: If p was previously known to the attacker, an attacker without the ability to flip bits can simply send (p, x), receive s, and send (p ⊕ 2^i, s). If p was not previously known to the attacker, p ⊕ 2^i is not either, and the game proceeds normally without attacker advantage.

For case 3: If p was previously known, the attacker sends (p, 0), receives s, and then sends (p, s ⊕ 2^i). If p was not known to the attacker, the game proceeds normally without attacker advantage.

For case 5: The value p = 2^i must have been known, and the transition can be emulated by simply sending (2^i, x). □

This means that the transitions the attacker gains which move him from one sane state to another, but along an unintended path, do not provide him with any significant advantage over an attacker that can not corrupt memory. What about the transitions that lead to weird states?

Lemma 3. For any sequence of state transitions that successfully violates the security property, there exists a p′ which is never sent by either party.

Proof. Any sequence for which such a p′ does not exist is of length 2^32 − 1 and can hence not break the security property. □

Theorem 2. Any sequence of state transitions during a successful attack that uses transition 2 can only produce output that is a proper subsequence of the output produced by an attacker that cannot flip memory bits, with a maximum of 10000 extra steps.

Proof. For case 2: Given that the attacker only gets to flip a bit once, the sequence S will be of the form

    (q_sane)^n_1 →_t2 (q_weird)^n_2 →_t2′ (q_sane)^n_3

with n_3 possibly zero. The weird state the attacker enters with t_2 is identical to a sane state except for a duplicate entry with the same p. From this state on, there are two classes of interactions that can occur:
(1) A tuple (p, x) is sent, which transitions cpu via t_2′ back into a sane state.
(2) A tuple (p′ ≠ p, x) is sent, which transitions into another state in the same class (sane except duplicate p).
An attacker without bit flips can produce an output sequence that contains the output sequence of the attacker with bit flips as follows:
(1) Perform identical actions until the bit flip.
(2) From then on, if p ⊕ 2^i is sent, replace it with p′.
(3) If p is sent and the address of the cell where p is stored is less than the address where p′ is stored, proceed normally to receive s_1. Next
    (a) Send (p′, x), receive s_2.
    (b) Fill any relevant empty cells.
    (c) Send (p, s_2).
    (d) Free the temporary cells again.
(4) If p is sent and the address of the cell where p is stored is larger than the address where p′ is stored, replace the sending of p with p′.
(5) Other operations proceed as normal. □

Theorem 3. Any sequence of state transitions during a successful attack that uses transition 4 can only produce output that is a proper subsequence of the output produced by an attacker that cannot flip memory bits.

Proof. The same properties about the weird state only transitioning into another weird state of the same form or back into a sane state that held in the proof for transition 2 hold for transition 4. To produce the desired output sequence, the attacker without bit flips simply replaces the first query for p after the bit flip with the query (0, 0). □

We have shown that we can emulate any bit-flipping attacker in a maximum of 10000 steps using a non-bit-flipping attacker. Since we assumed that our bit-flipping attacker can obtain an attack probability

    P[s ∈ o_IFSM] > |o_exploit| / 2^31,

it follows that the emulation for the bit-flipping attacker by a non-bit-flipping attacker achieves

    P[s ∈ o_IFSM] > (|o_exploit| + 10000) / 2^31 > |o_exploit| / 2^32.

This contradicts our assumption that the non-bit-flipping attacker cannot beat our security boundary, and hence proves that a bit-flipping attacker cannot get an advantage of even a single bit over a non-bit-flipping attacker.

6 CONSEQUENCES
There are a number of consequences of the previous discussion; they mostly relate to questions about mitigations, demonstrating non-exploitability, and the decoupling of exploitation from control flow.

6.1 Making statements about non-exploitability is difficult
Even experts in computer security routinely make mistakes when assessing the exploitability of a particular security issue. Examples range from Sendmail bugs [19] via the famous exploitation of a memcpy with ’negative’ length in Apache [18] to the successful exploitation of hardware-failure-induced random bit flips [17]. In all of these cases, large percentages of the security and computer science community were convinced that the underlying memory corruption could not be leveraged meaningfully by attackers, only to be proven wrong later.

It is difficult to reason about the computational power of a given weird machine: After all, a vulnerability provides an assembly language for a computer that has never been programmed before, and that was not designed with programmability in mind. The inherent difficulty of making statements about the non-existence of programs in a given machine language with only empirically accessible semantics may be one of the reasons why statements about non-exploitability are difficult.

Furthermore, many security vulnerabilities have the property that many different initial states can be used to initialize the weird machine, further complicating matters: One needs to argue over all possible transitions into weird states and their possible trajectories thereafter.

6.2 Making statements about non-exploitability is possible
While making statements about non-exploitability is supremely difficult for complex systems, somewhat surprisingly we can construct computational environments and implementations that are provably resistant to classes of memory-corrupting attackers.

This may open a somewhat new research direction: What data structures can be implemented with what level of resiliency against memory corruptions, and at what performance cost?

6.3 Mitigations and their utility
Computer security has a long history of exploit mitigations - and bypasses for these mitigations: From stack cookies [7, 15] via ASLR [21] to various forms of control-flow integrity (CFI) [1, 9, 22]. The historical pattern has been the publication of a given mitigation, followed by methods to bypass the mitigations for particular bug instances or entire classes of bugs.

In recent years, exploit mitigations that introduce randomness into the states of cpu have been very popular, ranging from ASLR [21] via various heap layout randomizations to efforts that shuffle existing code blocks around to prevent ROP-style attacks. It has often been argued (with some plausibility) that these prevent exploitation - or at least "raise the bar" for an attacker. While introducing unpredictability into a programming language makes programming more difficult and less convenient, it is somewhat unclear to what extent layering such mitigations provides long-term obstacles for an attacker that repeatedly attacks the same target.

An attacker that deals with the same target program repeatedly finds himself in a situation where he repeatedly programs highly related computational devices, and it is doubtful that no weird machine program fragments exist which allow an attacker to achieve security violations in spite of not knowing the exact state of cpu from the outset. It is imaginable that the added benefit from increasing randomization beyond ASLR vanishes rapidly if the attacker cannot be generically prevented from reading crucial parts of memory.
Mitigations should be preferred that detect corruptions and large classes of weird states in order to terminate the program quickly.⁴ Ideally, mitigations should work independently of the particular way the attacker chooses to program the weird machine. Mitigations that only break a particular way of attacking a vulnerability are akin to blacklisting a particular programming language idiom - unless the idiom is particularly important and unavoidable, odds are that an attacker can work around the missing idiom. While this certainly creates a cost for the attacker, the risk is that this is a one-off cost: The attacker only has to construct a new idiom once, and can re-use it for multiple attacks on the same target.

⁴ Strong stack cookies are one example of a mitigation that will deterministically detect a particular class of corruptions if a given program point ρ_i is reached.

6.3.1 Limitations of CFI to prevent exploitation. It should be noted that both examples under consideration in this paper exhibited perfect control-flow integrity: An attacker never subverted control flow (nor could he, in the computational model we used). Historically, attackers preferred to obtain control over the instruction pointer of cpu - so most effort on the defensive side is spent on preventing this from happening. It is likely, though, that the reason why attackers prefer hijacking the instruction pointer is that it allows them to leave the ”difficult” world of weird machine programming and program a machine that is well-understood with clearly specified semantics - the cpu. It is quite unclear to what extent perfect CFI would render attacks impossible; this depends heavily on the security properties of the attacked program, as well as the other code it contains.

An excessive focus on control flow may set wrong priorities: Exploitation can occur without control flow ever being diverted, and the only thing that can obviously be prevented by perfect CFI are arbitrary syscalls out-of-sequence with the normal behavior of the program. While this in itself may be a worthwhile goal, the amount of damage an attacker can do without subverting control flow is substantial.

6.4 Acknowledgements
This paper grew out of long discussions with and benefited from suggestions given by (in random order): Felix Lindner, Ralf-Philipp Weinmann, Willem Pinckaers, Vincenzo Iozzo, Julien Vanegue, Sergey Bratus, Ian Beer, William Whistler, Sean Heelan, Sebastian Krahmer, Sarah Zennou, Ulfar Erlingsson, Mark Brand, Ivan Fratric, Jann Horn, Mara Tam and Alexander Peslyak.

REFERENCES
[1] Martín Abadi, Mihai Budiu, Úlfar Erlingsson, and Jay Ligatti. 2005. Control-flow Integrity. In Proceedings of the 12th ACM Conference on Computer and Communications Security (CCS '05). ACM, New York, NY, USA, 340–353. https://doi.org/10.1145/1102120.1102165
[2] Clive Blackwell and Hong Zhu (Eds.). 2014. Cyberpatterns, Unifying Design Patterns with Security and Attack Patterns. Springer. https://doi.org/10.1007/978-3-319-04447-7
[3] Sergey Bratus, Julian Bangert, Alexandar Gabrovsky, Anna Shubina, Michael E. Locasto, and Daniel Bilar. 2014. Weird Machine Patterns. See [2], 157–171. https://doi.org/10.1007/978-3-319-04447-7_13
[4] Sergey Bratus, Michael E. Locasto, Meredith L. Patterson, Len Sassaman, and Anna Shubina. 2011. Exploit Programming: From Buffer Overflows to Weird Machines and Theory of Computation. ;login: 36, 6 (Dec. 2011), 13–21. https://www.usenix.org/publications/login/december-2011-volume-36-number-6/exploit-programming-buffer-overflows-weird
[5] Michael C. Browne, Edmund M. Clarke, and Orna Grumberg. 1988. Characterizing Finite Kripke Structures in Propositional Temporal Logic. Theor. Comput. Sci. 59 (1988), 115–131. https://doi.org/10.1016/0304-3975(88)90098-9
[6] Stephen A. Cook and Robert A. Reckhow. 1972. Time-bounded Random Access Machines. In Proceedings of the Fourth Annual ACM Symposium on Theory of Computing (STOC '72). ACM, New York, NY, USA, 73–80. https://doi.org/10.1145/800152.804898
[7] Crispin Cowan, Calton Pu, Dave Maier, Heather Hintony, Jonathan Walpole, Peat Bakke, Steve Beattie, Aaron Grier, Perry Wagle, and Qian Zhang. 1998. StackGuard: Automatic Adaptive Detection and Prevention of Buffer-overflow Attacks. In Proceedings of the 7th USENIX Security Symposium (SSYM'98). USENIX Association, Berkeley, CA, USA. http://dl.acm.org/citation.cfm?id=1267549.1267554
[8] Thomas Dullien. 2011. Exploitation and State Machines. In Infiltrate Offensive Security Conference, Miami Beach, Florida. http://www.slideshare.net/scovetta/fundamentals-of-exploitationrevisited
[9] Enes Göktas, Elias Athanasopoulos, Herbert Bos, and Georgios Portokalidis. 2014. Out of Control: Overcoming Control-Flow Integrity. In Proceedings of the 2014 IEEE Symposium on Security and Privacy (SP '14). IEEE Computer Society, Washington, DC, USA, 575–589. https://doi.org/10.1109/SP.2014.43
[10] Jan Friso Groote and Frits W. Vaandrager. 1990. An Efficient Algorithm for Branching Bisimulation and Stuttering Equivalence. In ICALP (Lecture Notes in Computer Science), Mike Paterson (Ed.), Vol. 443. Springer, 626–638. http://dblp.uni-trier.de/db/conf/icalp/icalp90.html#GrooteV90
[11] Sean Heelan. 2010. Misleading the Public for Fun and Profit. https://sean.heelan.io/2010/12/07/misleading-the-public-for-fun-and-profit/ (Dec. 2010).
[12] Hong Hu, Shweta Shinde, Sendroiu Adrian, Zheng Leong Chua, Prateek Saxena, and Zhenkai Liang. 2016. Data-Oriented Programming: On the Expressiveness of Non-Control Data Attacks. In 37th IEEE Symposium on Security and Privacy, San Jose, CA, USA, May 2016.
[13] Yoongu Kim, Ross Daly, Jeremie Kim, Chris Fallin, Ji Hye Lee, Donghyuk Lee, Chris Wilkerson, Konrad Lai, and Onur Mutlu. 2014. Flipping Bits in Memory Without Accessing Them: An Experimental Study of DRAM Disturbance Errors. SIGARCH Comput. Archit. News 42, 3 (June 2014), 361–372. https://doi.org/10.1145/2678373.2665726
[14] Gene Novark, Emery D. Berger, and Benjamin G. Zorn. 2007. Exterminator: Automatically Correcting Memory Errors with High Probability. In Proceedings of the 2007 ACM SIGPLAN Conference on Programming Language Design and Implementation. ACM Press.
[15] Gerardo Richarte. 2002. Four Different Tricks to Bypass StackShield and StackGuard Protection. World Wide Web 1 (2002).
[16] Felix Schuster, Thomas Tendyck, Christopher Liebchen, Lucas Davi, Ahmad-Reza Sadeghi, and Thorsten Holz. 2015. Counterfeit Object-oriented Programming: On the Difficulty of Preventing Code Reuse Attacks in C++ Applications. In IEEE Symposium on Security and Privacy. IEEE Computer Society, 745–762. http://dblp.uni-trier.de/db/conf/sp/sp2015.html#SchusterTLDSH15
[17] Mark Seaborn and Thomas Dullien. 2015. Exploiting the DRAM Rowhammer Bug to Gain Kernel Privileges. http://googleprojectzero.blogspot.fr/2015/03/exploiting-dram-rowhammer-bug-to-gain.html (March 2015).
[18] GOBBLES Security. 2002. Ending a Few Arguments with One Simple Attachment. BugTraq Mailing List (June 2002).
[19] LSD Security. 2003. Technical Analysis of the Remote Sendmail Vulnerability. Email posted to the Bugtraq Mailing List (March 2003). http://seclists.org/bugtraq/2003/Mar/44
[20] Hovav Shacham. 2007. The Geometry of Innocent Flesh on the Bone: Return-into-libc Without Function Calls (on the x86). In Proceedings of the 14th ACM Conference on Computer and Communications Security (CCS '07). ACM, New York, NY, USA, 552–561. https://doi.org/10.1145/1315245.1315313
[21] PaX Team. 2003. aslr.txt (text file). https://pax.grsecurity.net/docs/aslr.txt (March 2003).
[22] PaX Team. 2015. RAP: RIP ROP. https://pax.grsecurity.net/docs/PaXTeam-H2HC15-RAP-RIP-ROP.pdf (Oct. 2015).
[23] Rob J. van Glabbeek and W. Peter Weijland. 1996. Branching Time and Abstraction in Bisimulation Semantics. J. ACM 43, 3 (May 1996), 555–600. https://doi.org/10.1145/233551.233556
[24] Julien Vanegue. 2014. The Weird Machines in Proof-Carrying Code. 2014 IEEE Security and Privacy Workshops (2014), 209–213. https://doi.org/10.1109/SPW.2014.37
[25] David Walker, Lester Mackey, Jay Ligatti, George A. Reis, and David I. August. 2006. Static Typing for a Faulty Lambda Calculus. In Proceedings of the Eleventh ACM SIGPLAN International Conference on Functional Programming (ICFP '06). ACM, New York, NY, USA, 38–49. https://doi.org/10.1145/1159803.1159809

APPENDIX
Appendix A: Program listing for the flat-array variant (variant1A.s)
Appendix B: Program listing for the linked-list variant (variant2A.s)
 1 .const firstIndex 6
 2 .const lastIndex 6 + (5000*2)
 3 BasicStateA:
 4   READ r0                # Read p
 5   READ r1                # Read s
 6 CheckForNullSecret:
 7   JZ r1, OutputErrorMessage
 8   JZ r0, OutputErrorMessage
 9 CheckForPresenceOfP:     # Run through all possible array entries.
10   LOAD firstIndex, r3
11   LOAD lastIndex, r4
12 CheckForCorrectP:
13   ICOPY r3, r5           # Load the stored p of the tuple
14   SUB r5, r0, r5         # Subtract the input p
15   JZ r5, PWasFound
16   ADD r3, 2, r3          # Advance the index into the tuple array.
17   SUB r3, r4, r5         # Have we checked all elements of the array?
18   JNZ r5, CheckForCorrectP
19 PWasNotFound:
20   LOAD firstIndex, r3
21   LOAD lastIndex, r4
22 SearchForEmptySlot:
23   ICOPY r3, r5
24   JZ r5, EmptyFound
25   ADD r3, 2, r3
26   SUB r3, r4, r5
27   JZ r5, NoEmptyFound
28   J SearchForEmptySlot
29 NoEmptyFound:
30 OutputErrorMessage:
31   SUB r0, r0, r0
32   PRINT r0
33   J BasicStateA
34 EmptyFound:
35   DCOPY r3, r0           # Write the password
36   ADD r3, 1, r3          # Adjust the pointer
37   DCOPY r3, r1           # Write the secret.
38   J BasicStateA
39 PWasFound:
40   LOAD 0, r4
41   DCOPY r3, r4           # Zero out the stored p
42   ADD r3, 1, r3
43   ICOPY r3, r5           # Read the stored s
44   PRINT r5
45   J BasicStateA

variant1A.s
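This listing makes the emulation step from the proof of Theorem 1 (case 1) concrete: querying a stored p prints s and frees the cell (PWasFound), and a fresh pair is stored via EmptyFound. A sketch against a dict model of this store (the dict stands in for the tuple array; capacity and error handling are omitted):

```python
def emulate_transition_1(store, p, i):
    """Emulate 'replace (p, s) by (p XOR 2**i, s)' without any bit flip:
    retrieve s by querying p (which frees the cell), then store the same
    secret under the flipped password."""
    s = store.pop(p)           # send (p, x): machine prints s, frees the cell
    store[p ^ (1 << i)] = s    # send (p XOR 2**i, s): machine stores the pair
    return store

store = {5: 42}
assert emulate_transition_1(store, 5, 1) == {7: 42}
```

Transitions 3 and 5 admit the same retrieve-then-restore pattern, which is why they confer no advantage over a non-corrupting attacker.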
 1 const free_head 5            # Head of the free list.
 2 const used_head 6            # Head of the used list.
 3   J InitializeFreeList
 4 BasicStateA:
 5   READ r0                    # Read p
 6   READ r1                    # Read s
 7   SUB r2, r2, r2             # Initialize a counter for number of elements.
 8 CheckForNullSecret:
 9   JZ r1, OutputErrorMessage  # Zero secret not allowed.
10   JZ r0, OutputErrorMessage  # Zero password not allowed.
11   LOAD used_head, r3         # The list consists of [p, s, nxt] tuples.
12 CheckForPresenceOfP:
13   JZ r3, EndOfUsedListFound
14   ICOPY r3, r4               # Load 'p' of the entry.
15   SUB r4, r0, r4             # Compare against the password
16   JZ r4, PWasFound           # Element was found.
17   ADD r3, 2, r3              # Advance to 'next' within [p, s, nxt]
18   ICOPY r3, r3               # Load the 'next' pointer.
19   J CheckForPresenceOfP
20 EndOfUsedListFound:
21   LOAD free_head, r3
22   JZ r3, OutputErrorMessage  # No more free elements available?
23   ICOPY r3, r2               # Get the first element from the free list
24   DCOPY r2, r0               # Write the [p, ?, ?]
25   ADD r2, 1, r4
26   DCOPY r4, r1               # Write the [p, s, ?]
27   LOAD used_head, r0
28   ICOPY r0, r1               # Load used_head to place it in 'next'
29   DCOPY r0, r2               # Rewrite used_head to point to new element
30   ADD r2, 2, r4              # Point to 'next' field
31   ICOPY r4, r2               # Load the ptr to the next free element into r2
32   DCOPY r4, r1               # Write the [p, s, next]
33   DCOPY r3, r2               # Write the free_head -> next free element
34   J BasicStateA
35 PWasFound:
36   ADD r3, 1, r2
37   ICOPY r2, r1               # Load the stored secret.
38   PRINT r1                   # Output the secret.
39   ADD r3, 2, r2              # Point r2 to the next field.
40   LOAD free_head, r1
41   ICOPY r1, r0               # Read the current pointer to the free list.
42   DCOPY r2, r1               # Point next ptr of current triple to free list.
43   DCOPY r1, r3               # Point free_head to current triple.
44   J BasicStateA
45 InitializeFreeList:
46   LOAD free_head, r0
47 LoopToInitialize:
48   ADD r0, 3, r1              # Advance to the next element.
49   ADD r0, 2, r0              # Advance to the next pointer inside.
50   DCOPY r0, r1               # Write the next pointer.
51   ADD r1, 0, r0              # Set current elt = next element.
52   SUB r0, 5000*3 + 7, r2     # Have we initialized enough?
53   JNZ r2, LoopToInitialize
54 TerminateFreeList:
55   SUB r0, 1, r0
56   DCOPY r0, r2               # Set the last next-pointer 0 to terminate
57                              # the free list.
58 WriteInitialFreeHead:
59   LOAD used_head+1, r0
60   LOAD free_head, r1
61   DCOPY r1, r0               # Set the free_head to point to the first triple.
62   J BasicStateA
63 OutputErrorMessage:
64   SUB r0, r0, r0
65   PRINT r0
66   J BasicStateA

variant2A.s
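The initialization code (InitializeFreeList through WriteInitialFreeHead) builds a chain of free triples. A sketch of the resulting memory image, assuming the layout implied by the listing's constants and comments (free_head in cell 5, used_head in cell 6, triples from used_head + 1 = 7 on, with the next pointer at offset 2 of each triple):

```python
FREE_HEAD = 5     # cell holding the head of the free list
USED_HEAD = 6     # cell holding the head of the used list
FIRST_TRIPLE = 7  # used_head + 1, as in WriteInitialFreeHead

def initial_memory(n_triples=5000):
    """Every [p, s, next] triple starts free, chained to its successor;
    the last next pointer stays 0, terminating the free list."""
    memory = [0] * (FIRST_TRIPLE + 3 * n_triples)
    for t in range(n_triples - 1):
        base = FIRST_TRIPLE + 3 * t
        memory[base + 2] = base + 3   # next pointer -> following triple
    memory[FREE_HEAD] = FIRST_TRIPLE  # free list starts at the first triple
    memory[USED_HEAD] = 0             # used list starts empty
    return memory

m = initial_memory(4)
assert m[FREE_HEAD] == 7 and m[9] == 10 and m[12] == 13 and m[18] == 0
```

Storing a pair pops the triple at the front of this chain; freeing one pushes it back, which is the allocator behavior the memory snapshots in the figure walk through.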