Main_vm is the instruction handler. The VM circuit only accumulates memory queries, using the WITNESS provided by the (presumably honest) prover. In this sense the VM is "local": it doesn't have access to the full memory space, but only to the values of the particular queries it encountered during execution. The RAM circuit sorts all the queries accumulated by the VM and ENFORCES general RAM validity as described above. Together these two actions guarantee RAM validity, so in all the descriptions below, when we talk about particular opcodes in the VM, we will use language like "operand number 0 is read from the stack at offset X". Even though such a "memory read" technically means using a witness provided by the prover, in practice we can assume the witness is correct and view it as a normal RAM access, as one would expect on a standard machine.
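For intuition only, here is a minimal non-circuit sketch of this split, with hypothetical types and names rather than the actual circuit code: the VM just appends every query it encounters to a log, and a separate pass sorts that log by address and timestamp and checks that every read returns the last value written to the same cell.

```rust
#[derive(Clone, Copy, Debug)]
struct MemoryQuery {
    timestamp: u64,
    address: u64,
    value: u64,
    is_write: bool,
}

/// Enforce "general RAM validity" over an unsorted query log: after sorting by
/// (address, timestamp), every read must return the last value written to the
/// same address (or 0 if the cell was never touched).
fn enforce_ram_validity(mut queries: Vec<MemoryQuery>) -> bool {
    queries.sort_by_key(|q| (q.address, q.timestamp));
    let mut last: Option<(u64, u64)> = None; // (address, current value of that cell)
    for q in queries {
        let current = match last {
            Some((addr, value)) if addr == q.address => value,
            _ => 0, // an untouched cell reads as zero
        };
        if !q.is_write && q.value != current {
            return false; // the prover's read witness contradicts the write history
        }
        let new_value = if q.is_write { q.value } else { current };
        last = Some((q.address, new_value));
    }
    true
}
```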
But depending on start_flag we should select between states:
```rust
let mut state =
    VmLocalState::conditionally_select(cs, start_flag, &bootloader_state, &hidden_fsm_input);
let synchronized_oracle = SynchronizedWitnessOracle::new(witness_oracle);
```
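The `conditionally_select` pattern is the circuit analogue of a ternary operator applied field by field. A rough plain-Rust sketch of the semantics, using a hypothetical simplified state rather than the real `VmLocalState`: on the first circuit instance start_flag is set and we take the fresh bootloader state, otherwise we continue from the hidden FSM input produced by the previous instance.

```rust
#[derive(Clone, Debug)]
struct LocalState {
    pc: u16,
    ergs_left: u32,
    // the real VmLocalState has many more fields (registers, callstack, queues, ...)
}

/// Result is `a` when `flag` is set, `b` otherwise. Inside the circuit this is
/// done field by field with boolean masking, so both branches are always "evaluated".
fn conditionally_select(flag: bool, a: &LocalState, b: &LocalState) -> LocalState {
    LocalState {
        pc: if flag { a.pc } else { b.pc },
        ergs_left: if flag { a.ergs_left } else { b.ergs_left },
    }
}
```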
Here we run the vm_cycle:
```rust
for _cycle_idx in 0..limit {
    state = vm_cycle(
        cs,
        state,
        &synchronized_oracle,
        &per_block_context,
        round_function,
    );
}
```
The VM runs in cycles. For each cycle:
Start in a prestate: perform all operations common to every opcode, namely deal with exceptions and resources, handle edge cases like the end of execution, select opcodes, and compute common values. Within the zkEVM framework, many entities that are "opcodes" in the EVM paradigm are implemented as plain function calls. This works because, from the perspective of an external caller, an inlined function (analogous to an opcode) is indistinguishable from an internal function call.
```rust
let (draft_next_state, common_opcode_state, opcode_carry_parts) =
    create_prestate(cs, current_state, witness_oracle, round_function);
```
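As an illustration only (the names below are hypothetical, and this is not the actual create_prestate logic), the bookkeeping of the prestate step has roughly this shape: charge the base cost of the decoded opcode, turn an out-of-ergs condition into an exception, and mask the opcode into a NOP once execution has ended.

```rust
#[derive(Clone, Copy, Debug, PartialEq)]
enum PendingOpcode {
    Nop,
    AddSub,
    Jump,
    // ...
}

struct PrestateSketch {
    opcode: PendingOpcode,
    ergs_left: u32,
    exception: bool,
}

/// Spend the opcode's base cost, flag an exception on underflow, and degrade to
/// NOP if execution has already finished (remaining cycles become no-ops).
fn create_prestate_sketch(
    decoded: PendingOpcode,
    base_cost: u32,
    ergs_left: u32,
    execution_has_ended: bool,
) -> PrestateSketch {
    let out_of_ergs = ergs_left < base_cost;
    PrestateSketch {
        opcode: if execution_has_ended || out_of_ergs {
            PendingOpcode::Nop
        } else {
            decoded
        },
        ergs_left: ergs_left.saturating_sub(base_cost),
        exception: out_of_ergs && !execution_has_ended,
    }
}
```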
Compute state diffs for every opcode. List of opcodes:
The VM cycle calls such functions for the different classes of opcodes: nop, add_sub, jump, bind, context, ptr, log, calls_and_ret, mul_div.
Here we briefly mention all opcodes defined in the system. Each logical "opcode" comes with modifiers, categorized into "exclusive" modifiers (only one of which can be applied) and "flags", or "non-exclusive" modifiers (several of which can be active simultaneously). The number of permissible "flags" can vary depending on the specific "exclusive" modifier chosen. All the data produced by the opcodes is written into the StateDiffsAccumulator:
```rust
pub struct StateDiffsAccumulator<F: SmallField> {
    // dst0 candidates
    pub dst_0_values: Vec<(bool, Boolean<F>, VMRegister<F>)>,
    // dst1 candidates
    pub dst_1_values: Vec<(Boolean<F>, VMRegister<F>)>,
    // flags candidates
    pub flags: Vec<(Boolean<F>, ArithmeticFlagsPort<F>)>,
    // specific register updates
    pub specific_registers_updates: [Vec<(Boolean<F>, VMRegister<F>)>; REGISTERS_COUNT],
    // zero out specific registers
    pub specific_registers_zeroing: [Vec<Boolean<F>>; REGISTERS_COUNT],
    // remove ptr markers on specific registers
    pub remove_ptr_on_specific_registers: [Vec<Boolean<F>>; REGISTERS_COUNT],
    // pending exceptions, to be resolved next cycle. Should be masked by opcode applicability already
    pub pending_exceptions: Vec<Boolean<F>>,
    // ergs left, PC
    // new ergs left if it's not one available after decoding
    pub new_ergs_left_candidates: Vec<(Boolean<F>, UInt32<F>)>,
    // new PC in case if it's not just PC+1
    pub new_pc_candidates: Vec<(Boolean<F>, UInt16<F>)>,
    // other meta parameters of VM
    pub new_tx_number: Option<(Boolean<F>, UInt32<F>)>,
    pub new_ergs_per_pubdata: Option<(Boolean<F>, UInt32<F>)>,
    // memory bounds
    pub new_heap_bounds: Vec<(Boolean<F>, UInt32<F>)>,
    pub new_aux_heap_bounds: Vec<(Boolean<F>, UInt32<F>)>,
    // u128 special register, one from context, another from call/ret
    pub context_u128_candidates: Vec<(Boolean<F>, [UInt32<F>; 4])>,
    // internal machinery
    pub callstacks: Vec<(Boolean<F>, Callstack<F>)>,
    // memory page counter
    pub memory_page_counters: Option<UInt32<F>>,
    // decommittment queue
    pub decommitment_queue_candidates: Option<(
        Boolean<F>,
        UInt32<F>,
        [Num<F>; FULL_SPONGE_QUEUE_STATE_WIDTH],
    )>,
    // memory queue
    pub memory_queue_candidates: Vec<(
        Boolean<F>,
        UInt32<F>,
        [Num<F>; FULL_SPONGE_QUEUE_STATE_WIDTH],
    )>,
    // forward piece of log queue
    pub log_queue_forward_candidates: Vec<(Boolean<F>, UInt32<F>, [Num<F>; QUEUE_STATE_WIDTH])>,
    // rollback piece of log queue
    pub log_queue_rollback_candidates: Vec<(Boolean<F>, UInt32<F>, [Num<F>; QUEUE_STATE_WIDTH])>,
    // sponges to run. Should not include common sponges for src/dst operands
    pub sponge_candidates_to_run: Vec<(
        bool,
        bool,
        Boolean<F>,
        ArrayVec<
            (
                Boolean<F>,
                [Num<F>; FULL_SPONGE_QUEUE_STATE_WIDTH],
                [Num<F>; FULL_SPONGE_QUEUE_STATE_WIDTH],
            ),
            MAX_SPONGES_PER_CYCLE,
        >,
    )>,
    // add/sub relations to enforce
    pub add_sub_relations: Vec<(
        Boolean<F>,
        ArrayVec<AddSubRelation<F>, MAX_ADD_SUB_RELATIONS_PER_CYCLE>,
    )>,
    // mul/div relations to enforce
    pub mul_div_relations: Vec<(
        Boolean<F>,
        ArrayVec<MulDivRelation<F>, MAX_MUL_DIV_RELATIONS_PER_CYCLE>,
    )>,
}
```
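To show how the per-class functions feed this accumulator, here is a heavily simplified, hypothetical sketch (not the real apply_* code): each class function runs on every cycle, and everything it proposes is pushed as a candidate gated by a boolean meaning "this opcode class actually executes this cycle"; an exclusive modifier (add vs sub) and a non-exclusive flag modifier (whether to update flags) are resolved inside the function.

```rust
#[derive(Default)]
struct StateDiffsSketch {
    /// (applies this cycle, candidate dst0 value); at most one candidate should apply
    dst_0_values: Vec<(bool, u64)>,
    /// (applies this cycle, "result is zero" flag as a stand-in for ArithmeticFlagsPort)
    flags: Vec<(bool, bool)>,
}

/// `is_sub` is the exclusive modifier (add vs sub), `set_flags` is a non-exclusive
/// flag modifier, and `applies` masks the whole contribution so that classes not
/// selected this cycle stay inert.
fn apply_add_sub_sketch(
    diffs: &mut StateDiffsSketch,
    applies: bool,
    is_sub: bool,
    set_flags: bool,
    src0: u64,
    src1: u64,
) {
    let result = if is_sub {
        src0.wrapping_sub(src1)
    } else {
        src0.wrapping_add(src1)
    };
    diffs.dst_0_values.push((applies, result));
    diffs.flags.push((applies && set_flags, result == 0));
}
```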
We will not go into implementation details here because the code is commented step by step and easy to follow. A short description:
Apply opcodes: for DST0 it is possible to have opcode-constrained updates only into registers. Then apply the StateDiffsAccumulator: update the memory, update the registers, apply changes to the VM state such as ergs left, and push data to queues for other circuits. If an event has a rollback, create the same event data but with the rollback flag set, and enforce the sponges. There are only 2 outcomes (a small sketch of the candidate-selection step follows the list):
we have a dst0 write (and maybe an src0 read), which we have already taken care of above
the opcode itself modified the memory queue, based on the outcome of the src0 read; in parallel, opcodes either
do not use sponges and only rely on src0/dst0
cannot have src0/dst0 in memory, but use sponges (UMA, near_call, far_call, ret)
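A minimal sketch of the candidate-selection step mentioned above, in plain Rust with a hypothetical helper (the real circuit does this with boolean masking rather than branching):

```rust
/// Pick the candidate whose selector is set, or keep the old value when no opcode
/// proposed an update this cycle. The real circuit additionally enforces that at
/// most one selector is set.
fn select_candidate(old_value: u64, candidates: &[(bool, u64)]) -> u64 {
    debug_assert!(candidates.iter().filter(|(applies, _)| *applies).count() <= 1);
    candidates
        .iter()
        .find(|(applies, _)| *applies)
        .map(|(_, value)| *value)
        .unwrap_or(old_value)
}

// e.g. resolving the new dst0 register value from the accumulator sketch above:
// let new_dst0 = select_candidate(old_dst0, &diffs.dst_0_values);
```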
Once we are out of the cyclic part of the VM, we set up the different queues:
```rust
let final_log_state_tail = final_state.callstack.current_context.log_queue_forward_tail;
let final_log_state_length = final_state
    .callstack
    .current_context
    .log_queue_forward_part_length;

// but we CAN still check that it's potentially mergeable, basically to check
// that witness generation is good
for (a, b) in final_log_state_tail.iter().zip(
    final_state
        .callstack
        .current_context
        .saved_context
        .reverted_queue_head
        .iter(),
) {
    Num::conditionally_enforce_equal(cs, structured_input.completion_flag, a, b);
}

let full_empty_state_small = QueueState::<F, QUEUE_STATE_WIDTH>::empty(cs);

let log_queue_current_tail = QueueTailState {
    tail: final_log_state_tail,
    length: final_log_state_length,
};
let log_queue_final_tail = QueueTailState::conditionally_select(
    cs,
    structured_input.completion_flag,
    &log_queue_current_tail,
    &full_empty_state_small.tail,
);
```
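In other words, the final log-queue tail exposed by this instance is the real tail only when execution completed within this instance; otherwise an empty tail is reported, and the in-flight state is what the next instance picks up through the hidden FSM input shown earlier. A simplified non-circuit sketch of that selection, with hypothetical types:

```rust
#[derive(Clone, Copy, Debug, PartialEq)]
struct QueueTailSketch {
    tail: [u64; 4], // stand-in for the sponge state words
    length: u32,
}

const EMPTY_TAIL: QueueTailSketch = QueueTailSketch { tail: [0; 4], length: 0 };

/// Mirrors the snippet above: when `completion_flag` is set, the forward tail must
/// be mergeable with the reverted queue head and is exposed as the final tail;
/// otherwise the public output carries an empty tail.
fn final_log_queue_tail(
    completion_flag: bool,
    current: QueueTailSketch,
    reverted_head: [u64; 4],
) -> QueueTailSketch {
    if completion_flag {
        assert_eq!(
            current.tail, reverted_head,
            "witness generation must produce mergeable queues"
        );
        current
    } else {
        EMPTY_TAIL
    }
}
```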