[mono-vm][specializer] Remove explicit types from simulated operand stack#19466
Conversation
Pull request overview
This PR simplifies the intra-block SSA conversion in the Move specializer by removing explicit Type annotations from the simulated operand stack and instead deriving types from the vid_types table when needed.
Changes:
- Replace the operand stack representation `Vec<(Slot, Type)>` with `Vec<Slot>`.
- Introduce `vid_type(...)` to recover a `Vid`'s `Type` from `vid_types` for ops that need type-derived behavior.
- Update bytecode conversion logic to push/pop plain `Slot`s and compute derived unary/binary result types via `vid_type`.
```rust
/// Invariant: only Vid slots appear on the operand stack.
fn vid_type(&self, slot: Slot) -> Result<Type> {
    match slot {
        Slot::Vid(id) => Ok(self.vid_types[id as usize].clone()),
        other => bail!("expected Vid slot on operand stack, got {:?}", other),
    }
}
```
Differential Security Review — PR #19466: Remove explicit types from simulated operand stack
Scope: third_party/move/mono-move/specializer/src/destack/ssa_conversion.rs — 1 file, +236/−251 lines
Strategy: Full trace (single file, medium complexity)
Module: mono-move/specializer — compile-time bytecode-to-SSA conversion pass; runs on verified Move bytecode before execution, not on a hot consensus or execution path
Executive Summary
| Severity | Count |
|---|---|
| CRITICAL | 0 |
| HIGH | 0 |
| MEDIUM | 1 |
| LOW | 1 |
Overall risk: Low. The refactor is internally consistent and safe for verified bytecode inputs. The single structural observation is that vid_types[id as usize] performs an unchecked slice index in release builds on a path guarded only by a debug_assert; while exploitability requires unverified bytecode to reach this pass, the indexing is worth hardening. There is also a test coverage gap for the two opcodes (Negate, FreezeRef) whose dispatch logic changed structurally.
Recommendation: APPROVE WITH NOTES
What Changed
Commits: 7700162d..01d72ced (1 commit)
Files changed: 1 Rust file | Lines: +236 / −251
| Module | Files Changed | Risk Level |
|---|---|---|
| `third_party/move/mono-move/specializer/src/destack/` | 1 | Medium |
The core change is replacing the simulated operand stack from Vec<(Slot, Type)> to Vec<Slot>. Type information is no longer bundled with each stack entry; instead it is retrieved on-demand from vid_types: Vec<Type> (indexed by Slot::Vid(id)) via the new vid_type() method. This is a correct simplification because every Vid on the stack was already registered in vid_types at allocation time.
Findings
[MEDIUM] vid_types[id as usize] — unchecked index, debug_assert is the only enforcement in release
File: third_party/move/mono-move/specializer/src/destack/ssa_conversion.rs
Blast radius: Internal to the specializer compile-time pass; not a runtime validator or consensus path
Test coverage: Partially tested (tested indirectly via all stack-using opcodes, but not specifically for out-of-bounds resilience)
Description: The new vid_type() helper indexes vid_types without a bounds check:
```rust
fn vid_type(&self, slot: Slot) -> Result<Type> {
    match slot {
        Slot::Vid(id) => Ok(self.vid_types[id as usize].clone()),
        other => bail!("expected Vid slot on operand stack, got {:?}", other),
    }
}
```

The only enforcement that a `Slot::Vid` on the stack was produced by this converter's `alloc_vid` (and therefore has a corresponding entry in `vid_types`) is the `debug_assert` in `push_slot`:

```rust
fn push_slot(&mut self, r: Slot) {
    debug_assert!(r.is_vid(), "only Vid slots belong on the operand stack");
    self.stack.push(r);
}
```

In a release build the `debug_assert` is compiled out. If a `Slot::Vid(id)` were placed on the stack by any path that bypasses `push_slot`, or if a `Vid` with an id beyond `vid_types.len()` were ever constructed (e.g., by a future caller misusing a stale `Slot` from a recycled converter), `vid_types[id as usize]` would panic.
Concrete impact: For inputs that reach this pass in normal usage, the invariant holds — every slot pushed is produced by alloc_vid which appends to vid_types before returning the Vid. No current exploit path exists. The concern is future maintenance: the bounds check is omitted where a bail! would suffice and would be robust against future callers.
Note: This is a test-gap/hardening observation, not a current bug.
[LOW] No test coverage for Negate or FreezeRef opcodes after structural dispatch change
File: third_party/move/mono-move/specializer/src/destack/ssa_conversion.rs
Test coverage: Untested (no .masm or .move test case exercises either opcode)
Description: Negate and FreezeRef are the two opcodes whose dispatch logic changed structurally in this PR. Previously both went through convert_unop(..., None) and the type was derived inside that function. Now each has an explicit pre-pop peek of stack.last() to determine the result type before calling convert_unop:
```rust
B::Negate => {
    let src_ty = self.vid_type(*self.stack.last().context("stack underflow")?)?;
    self.convert_unop(UnaryOp::Negate, src_ty)?;
},
B::FreezeRef => {
    let src_ty = self.vid_type(*self.stack.last().context("stack underflow")?)?;
    let result_ty = match src_ty {
        Type::MutableReference(inner) => Type::Reference(inner),
        other => bail!("FreezeRef on non-mutable-reference type {:?}", other),
    };
    self.convert_unop(UnaryOp::FreezeRef, result_ty)?;
},
```

A search of `tests/test_cases/` finds no `.masm` or `.move` file using `freeze_ref`, `FreezeRef`, `negate`, or `Negate`. The peek-then-pop pattern is correct (`stack.last()` and `pop_slot()` in `convert_unop` operate on the same top element with no intervening mutation), and the stricter `FreezeRef` check aligns with what the Move bytecode verifier enforces. The gap is the absence of a regression test for these paths; closure opcodes (`PackClosure`, `CallClosure`, etc.) are also unexercised in the test suite.
Concrete impact: Test gap, not a code bug. The logic is sound for verified bytecode.
Test Coverage Summary
| Changed Code Area | Coverage | Risk Elevation | Recommendation |
|---|---|---|---|
| `push_slot` / `pop_slot` / `pop_n_reverse` | Well-tested (via all stack-using opcodes) | — | — |
| `vid_type()` indexing | Partially tested (exercises path, not bounds-failure) | Low | Consider `get()` + `bail!` over direct index |
| `Negate` / `FreezeRef` dispatch | Untested | Low | Add `.masm` test cases |
| `PackClosure` / `CallClosure` | Untested | Low | Add `.masm` test cases |
| Binary ops result type via `vid_type(lhs)` | Partially tested | — | Exercised by arithmetic tests |
Blast Radius
The changed code is internal to the destack compile-time pass. It operates on CompiledModule inputs after the Move bytecode verifier has run. Its outputs feed into the subsequent lower / specialize pipeline stages.
| Changed Function | Non-test callers | Classification |
|---|---|---|
| `vid_type()` (new) | All `convert_binop`, `Negate`, `FreezeRef` call sites | LOW — single crate, compile-time only |
| `push_slot` / `pop_slot` | All ~50 bytecode arms in `convert_bytecode` | LOW — contained blast, single module |
No shared constants, public API surface, or cross-crate types changed.
Historical Context
The removed TODO comment (// [TODO] check if we need to have types in the simulated stack) confirms this is the intended resolution of a pre-existing design question. No code removed in this PR traces to any prior security-fix or bug-fix commit in the file's history.
Recommendations
Before Merge (Non-blocking)
- Consider replacing `self.vid_types[id as usize]` with `self.vid_types.get(id as usize).cloned().context("Vid id out of range")` in `vid_type()` so the invariant holds in release builds without panicking, converting a potential future panic into a propagated `Result::Err`.
Technical Debt
- Add `.masm` test cases for `FreezeRef`, `Negate`, and at least one closure opcode (`PackClosure` / `CallClosure`) to give the new dispatch logic explicit regression coverage.
Sent by Cursor Automation: Security Review Bot
✅ Forge suite



Description
For partial SSA conversion, we do not need to have each element on the simulated operand stack be explicitly typed, because this info is available elsewhere.
Note
Medium Risk
Touches core bytecode-to-SSA conversion and changes how result types are determined for several ops, which could affect IR typing/correctness if any stack/type invariants are violated.
Overview
Removes explicit `(Slot, Type)` tracking from the SSA converter's simulated operand stack, storing only `Vid` slots and looking up types via a new `vid_type()` helper backed by `vid_types`.

Updates bytecode conversion to use the untyped stack throughout and adjusts type derivation for ops that depended on stack-carried types (notably `convert_binop`, `ReadRef`, and derived-type unops like `Negate` / `FreezeRef`), including a debug assertion that only `Vid` values appear on the operand stack.

Reviewed by Cursor Bugbot for commit 6223347. Bugbot is set up for automated code reviews on this repo.