Skip to content

Decode/Encode to the next level#152

Open
DZakh wants to merge 343 commits intomainfrom
dz/decode-encode
Open

Decode/Encode to the next level#152
DZakh wants to merge 343 commits intomainfrom
dz/decode-encode

Conversation

@DZakh
Copy link
Copy Markdown
Owner

@DZakh DZakh commented Aug 29, 2025

No description provided.

claude and others added 27 commits February 1, 2026 16:42
- Fix toExpression to return inner element type for forward compactColumns schemas
- Fix toExpression to return object expression + "[]" for reversed compactColumns schemas
- Add compactColumns export to S.js for JavaScript/TypeScript users
- Update S.d.ts to use ArrayFormat type for array schemas
- Note: Field-level transformations (nullAsOption, bigint) are not applied in
  compactColumns - it creates raw objects without field transformations

https://claude.ai/code/session_01SKZEuGXnrhtTCktzecM4WE
* Fix hasTransform and isAsync for recursive schemas

- Add hasTransform field to schema type (parallel to isAsync)
- Cache hasTransform in compileDecoder alongside isAsync
- Refactor recursiveDecoder to use optimistic compilation:
  - Start with optimistic assumptions (hasTransform=false, isAsync=false)
  - Set values on def before compiling for inner circular refs to read
  - After compilation, check if actual values match assumptions
  - If mismatched, clear cache and recompile with correct values
- Remove messy isAsyncInternal workaround and FIXME comment
- Update JSON parse code test expectation (no more array/object recreation)

This fixes the TODO about not needing to recreate array/json values.
Recursive schemas without transforms now generate cleaner code.

https://claude.ai/code/session_01J6u8Aw2p2PGXbZUMUEScK1

* Update compiled Sury.res.mjs

https://claude.ai/code/session_01J6u8Aw2p2PGXbZUMUEScK1

* Fix embed of temporary decoder

* Improve recursive decoder

* Final test fixes for recursive improvement

---------

Co-authored-by: Claude <noreply@anthropic.com>
- Fix S.to to store properties directly on compactColumns schema instead
  of chaining via .to, which prevents objectSchema decoder from running
  in the reversed direction
- Fix compactColumns encoding for nullAsOption fields by properly
  detecting direction using selfSchema.parser presence
- Add CompactColumnsSchema type and S.to overload for proper TypeScript
  type inference with compactColumns
- Fix test argument order in edge case tests
- Update test expectations for non-object schema error message

https://claude.ai/code/session_01SKZEuGXnrhtTCktzecM4WE
- Use default S.to logic instead of special-casing compactColumns
- Add skipTo handling in compactColumnsDecoder for both forward and reverse
- Add check in objectDecoder to skip when .to is compactColumns (reversed chain)
- Fix direction detection using selfSchema.to presence
- Update toExpression to show proper array types for compactColumns
- Fix toExpression for reversed compactColumns to show []
- Keep CompactColumnsSchema type and S.to overload for proper type inference

https://claude.ai/code/session_01SKZEuGXnrhtTCktzecM4WE
* Decouple refiner from decoder by storing in refiner field

Instead of baking refiner logic into the decoder via appendRefiner,
store the refiner function in the dedicated `refiner` field on the
internal schema type. The parse function already calls `expected.refiner`
separately after the decoder, so this change requires no parse logic
modifications. Chaining of multiple refiners is handled by composing
the builder functions in the refiner field.

https://claude.ai/code/session_01LPsNZxPJdRczdL94YEHccc

* Change refiner field type from builder to code-returning function

Refiner no longer mutates the val inside itself. Instead of using the
builder type (val => val), the refiner and inputRefiner fields now use
(~input: val, ~selfSchema: internal) => string - the same type that
internalRefine accepts. The parse function calls the refiner and appends
the returned code string to codeAfterValidation. Multiple refiners are
composed by concatenating their code strings.

https://claude.ai/code/session_01LPsNZxPJdRczdL94YEHccc

* Remove ~selfSchema from refiner type and clean up internalRefine

The selfSchema param was unused by all refiners (always ~selfSchema as _)
since it's available via input.expected. Simplified the refiner type to
(~input: val) => string. Moved the switch outside of mut.refiner
assignment in internalRefine for cleaner control flow.

https://claude.ai/code/session_01LPsNZxPJdRczdL94YEHccc

* Rename refiner arg to makeRefiner and refinerCode to refiner

https://claude.ai/code/session_01LPsNZxPJdRczdL94YEHccc

* Remove unused appendRefiner fn

---------

Co-authored-by: Claude <noreply@anthropic.com>
)

* Update test assertions for logically correct compiled code snapshot changes

8 snapshot assertions updated across 6 test files to match the new
compiled output from the val architecture refactoring:
- S_test.ts: Fix error message for recursive self-as-transform-target
- S_refine_test.res: Update parse code snapshot (object construction before validation)
- S_object_test.res: Update parse + reverse convert snapshots
- Example_test.res: Update serialize code snapshot (unused variable read)
- S_recursive_test.res: Update 2 ParseAsync snapshots (new recursive ref format)
- S_union_test.res: Update json-rpc ReverseConvert (3x→2x union duplication)

https://claude.ai/code/session_01Uax71CbmZTwTgfRjWVgcLu

* Revert S_test.ts message change (was correct before) and add FIXME for union v5 double conversion

The "Recursive with self as transform target" test correctly expects
"from string to Node[]" — the library now incorrectly says "from Node".
Added FIXME noting v5 is still converted twice in json-rpc ReverseConvert.

https://claude.ai/code/session_01Uax71CbmZTwTgfRjWVgcLu

* Update json-rpc ReverseConvert assertion to expect single v5 conversion

The nested error union should only need one pass to convert v5, not two.
The test will fail until the library is fixed.

https://claude.ai/code/session_01Uax71CbmZTwTgfRjWVgcLu

---------

Co-authored-by: Claude <noreply@anthropic.com>
- Add objectDecoder skip logic for reversed compactColumns chain
- Restore direction detection in compactColumnsDecoder
- Add compactColumns and reverseConvertOrThrow to S.js exports
- Add reverseConvertOrThrow type to S.d.ts
- Update TypeScript tests to use parser and reverseConvertOrThrow

https://claude.ai/code/session_01SKZEuGXnrhtTCktzecM4WE
Instead of hardcoding selfReverse=false, check if the item schema is
self-reversing by comparing Dict.getUnsafe(reversedKey) === itemInternal.
Also rename reverseKey to reversedKey.

https://claude.ai/code/session_01FSGU4DhYDjWQu2n3bE3FQh

Co-authored-by: Claude <noreply@anthropic.com>
- Add compactColumns export to Pack.res for TypeScript/JS usage
- Update S.d.ts: remove reverseConvertOrThrow (use S.encoder), fix compactColumns type to Output[][]
- Fix toExpression for compactColumns without S.to to show inner schema type (e.g., "string[][]")
- Update S_test.ts to use S.encoder instead of reverseConvertOrThrow
- Add test for compactColumns toExpression without S.to

https://claude.ai/code/session_01SKZEuGXnrhtTCktzecM4WE
- Add Alpha.6 section documenting compactColumns feature
- Remove accidentally committed genType generated files

https://claude.ai/code/session_01SKZEuGXnrhtTCktzecM4WE
- Fix compactColumnsDecoder direction detection to be based on input
  type rather than selfSchema.to
- Fix compactColumnsEncoder to pass through when input is already
  in columnar format (happens when called as parser after reverse)
- Add U.assertCompiledCode checks to all compactColumns test cases
- Use S.untag instead of Obj.magic to access format field in tests

https://claude.ai/code/session_01SKZEuGXnrhtTCktzecM4WE
- Merge Alpha.6 into Alpha.5 in IDEAS.md
- Support empty objects in compactColumns (parse empty columnar to empty array)
- Improve error message for non-object schemas to:
  "S.compactColumns supports only object schemas. Use S.compactColumns(S.unknown)->S.to(objectSchema)."
- Update tests to reflect new behavior

https://claude.ai/code/session_01SKZEuGXnrhtTCktzecM4WE
…nto claude/replace-unnest-compactcolumns-Zs0Za
* Change S.refine JS API from callback-based to boolean-returning signature

S.refine now takes (value) => boolean check function with optional
{ error, path } options instead of (value, s) => void with s.fail().

https://claude.ai/code/session_013mvoLw6QfNz5DJAUL1jV7W

* Update JS docs and add ideas.md with changelog for S.refine change

https://claude.ai/code/session_013mvoLw6QfNz5DJAUL1jV7W

* Inline js_refine check via internalRefine, remove tracked S_test.js, update IDEAS.md

- Refactored js_refine to use internalRefine directly, generating inline
  validation code instead of going through effectCtx
- Removed S_test.js from git (generated file, already gitignored)
- Moved changelog entry from ideas.md to existing IDEAS.md

https://claude.ai/code/session_013mvoLw6QfNz5DJAUL1jV7W

* Add test for S.refine with path option

https://claude.ai/code/session_013mvoLw6QfNz5DJAUL1jV7W

* Improve S.refine docs with examples following Zod-style documentation

Added sections for custom error messages, custom error paths, and
chaining refinements with clear code examples.

https://claude.ai/code/session_013mvoLw6QfNz5DJAUL1jV7W

* Rename S.asyncParserRefine to S.asyncDecoderAssert

https://claude.ai/code/session_013mvoLw6QfNz5DJAUL1jV7W

* Remove EffectCtx from asyncDecoderAssert in favor of throwing directly

- Renamed refiner param to assertFn, removed s (EffectCtx) argument
- Removed EffectCtx type (no longer used anywhere)
- Restored comment on .with() overload
- Updated tests, docs, and changelog

https://claude.ai/code/session_013mvoLw6QfNz5DJAUL1jV7W

* Fix async docs wording and simplify async test assertions

https://claude.ai/code/session_013mvoLw6QfNz5DJAUL1jV7W

* Wrap asyncDecoderAssert errors as custom SuryError

Thrown errors are caught and re-thrown as SuryError with code "custom",
preserving the original error message. Already-SuryErrors are re-thrown
as-is.

https://claude.ai/code/session_013mvoLw6QfNz5DJAUL1jV7W

* Revert asyncDecoderAssert custom error wrapping, keep it simple

Let transform handle error wrapping naturally.

https://claude.ai/code/session_013mvoLw6QfNz5DJAUL1jV7W

---------

Co-authored-by: Claude <noreply@anthropic.com>
…lAsOption fields

- Forward direction (parse): Convert null to undefined for fields with null variant
- Reverse direction (encode): Detect undefined->null transformation by checking
  if field schema's anyOf contains an undefined variant with .to pointing to null
- Update tests to expect null<->undefined transformation in compiled code
- Fix TypeScript test to expect undefined output when parsing nullable fields

https://claude.ai/code/session_01SKZEuGXnrhtTCktzecM4WE
The compactColumns transformation produces/consumes an array of objects,
so the target schema must be S.array(objectSchema) not just objectSchema.

Changes:
- Update compactColumnsDecoder to extract properties from selfSchema.to.additionalItems
- Update TypeScript tests to use S.array(S.schema({...}))
- Update ReScript tests to use S.array(S.schema(...))
- Update error message to reflect the new API format
- Update compiled code expectations for the array wrapper validation

https://claude.ai/code/session_01SKZEuGXnrhtTCktzecM4WE
* Update S.refine ReScript API from callback-based to boolean-returning

Changed the public ReScript API for S.refine from the old callback-based
style `(t<'value>, s<'value> => 'value => unit) => t<'value>` to the new
boolean-returning style `(t<'value>, 'value => bool, ~error: string=?,
~path: array<string>=?) => t<'value>`.

This aligns with the TypeScript API change in d2d8dd4, using labeled
arguments (~error, ~path) instead of a JS-style options object.

Updated: S.resi, Sury.resi, Sury.res implementation, IDEAS.md, docs,
and all test files using the refine API. Also updated compiled code
snapshots to match actual output.

https://claude.ai/code/session_01FL2DmWjTLoMgTvdE43KG9z

* Revert assertCompiledCode expectations to document correct behavior

Keep the original expected compiled code in tests where the actual
generated code is broken (pre-existing bug with refine on object schemas
and union schemas). The tests fail, documenting what the code generator
should produce once the underlying bugs are fixed.

https://claude.ai/code/session_01FL2DmWjTLoMgTvdE43KG9z

---------

Co-authored-by: Claude <noreply@anthropic.com>
- Fix Example_test to use reverseConvertOrThrow instead of reverseConvertToJsonOrThrow
  (optional fields with undefined can't be converted to JSON)
- Fix S_null_test to expect throw for reverseConvertToJsonStringOrThrow with complex
  optional nullable fields
- Update S_null_test compiled code snapshot for new validation patterns
- Update S_refine_test compiled code snapshots for new refine compilation
- Fix bug in getShapedSerializerOutput where flattenedOutput.vals could be None

https://claude.ai/code/session_01SKZEuGXnrhtTCktzecM4WE
- Remove propsToUse logic in toExpression for compactColumns (compactColumns never has its own properties)
- Reuse array expression logic for compactColumns without S.to
- Remove Object with CompactColumns toExpression case
- Remove shouldSkipForCompactColumns logic from objectDecoder (only compactColumns decoder should handle this)
- Revert unrelated flatten fix
- Change mut.serializer to mut.encoder for compactColumns
- Update S_toExpression_test.res to use S.array wrapper for reversed compactColumns test

https://claude.ai/code/session_01SKZEuGXnrhtTCktzecM4WE
claude and others added 17 commits April 16, 2026 13:29
…t schema type

The error path in getShapedSerializerOutput was reversing targetSchema
before displaying its expression, but targetSchema includes a .to chain
appended by getDecoder2 (e.g. .to=Some(S.unknown)). Reversing the full
chain put "unknown" at the head, producing "Missing input for unknown"
instead of "Missing input for string". Removing the unnecessary ->reverse
fixes the display since toExpression already returns the correct type name
from the schema's tag.

https://claude.ai/code/session_01ARzqrRDCKosCbo2QiiVHjr
B.refine sets val.schema to the narrowed type (e.g. string) while
simultaneously attaching checks that validate the narrowing. When a
check fails, failInvalidType and embedInvalidInput read val.schema as
"received", producing received === expected. Use val.prev.schema
instead, which is the actual runtime type before narrowing.

Adds 9 tests covering unknown-to-X mismatches (string, float, bool,
object, nested field) and typed-input cases (float→int32 format,
string→number NaN, string→literal). All fail without the fix.

https://claude.ai/code/session_01HzM7HfHaGJa4p44ZBe4b8r
Fix received schema in error messages for type mismatches
Remove unnecessary reverse call in error message generation
The merge() function deletes the .allocate (.a) property from vals during
code generation. In recursiveDecoder, mergeWithPathPrepend is called on the
output val to compute path-prepend code, which destroys .allocate. When this
val is subsequently used as input to a transform parser (via embedTransformation),
the missing .allocate causes "TypeError: input.a is not a function".

Restore .allocate = initialAllocate after the mergeWithPathPrepend call so
the output val remains usable as input to downstream builders.

https://claude.ai/code/session_011o78ugN3wrSs59TQiCHQtK
…m test

The template literal in the Parse snapshot had 2 spaces of source indentation
leaking into the expected string before "Node:". The ReverseConvert snapshot
on the adjacent line was already correct (no leading spaces).

https://claude.ai/code/session_011o78ugN3wrSs59TQiCHQtK
The auto-detect in getCompiledCodeString can't compile the Node def for
ReverseConvert (reversing the recursive def throws), so the Node block was
silently omitted from actual output while the expected snapshot still
included it. Pass ~embedded=[] to explicitly skip the embedded block,
matching the actual behavior.

https://claude.ai/code/session_011o78ugN3wrSs59TQiCHQtK
…snapshot

The auto-detect in getCompiledCodeString can't compile the Node def for
ReverseConvert (reversing the recursive def throws). Use ~embedded=[("Node", 2)]
to explicitly select the embedded function, and compute the recursive cache
key dynamically from S.reverse(nodeSchema) defs to avoid hardcoding seq numbers.

https://claude.ai/code/session_011o78ugN3wrSs59TQiCHQtK
Fix recursive schema reuse in transform chains
Exposes const on the Object and Array schema variants in the public API,
matching the pattern of other schema types. Also handles const in JSON
schema output for both array and object schemas.

https://claude.ai/code/session_011mb3cK6Gj2iMici8FGBmS7
Clean up whitespace and remove outdated TODO comment
- Update "Successfully serializes recursive object" snapshot to the
  current codegen output (separated `let v0;` + assignment); switch to
  ~embedded=[("Node", 0)] with a dynamically computed recKey since the
  auto-detect path can't compile the reversed Node def.
- Replace the ReScript-struct literal passed to assertThrowsMessage in
  "Fails to serialise nested recursive object" with the actual error
  string.
- Update the Parse and ReverseConvert snapshots in "Recursively
  transforms all objects when added transform to the recursive's
  function returned schema" to match the current codegen style
  (typeof-||-throw checks, wrapped transform in try/catch, and the new
  embed key format).

https://claude.ai/code/session_01BJsvHrEpohu52TgoWywggM
Refresh stale recursive test snapshots and fix mis-shaped assertion
When an object field's async val comes from asyncVal (e.g. the wrapping
Promise.all(<arrayVar>) produced by arrayDecoder for arrays of async
items), its .inline is a non-variable expression. completeObjectVal uses
that inline string both as Promise.all argument and as the destructuring
pattern of the .then callback, producing invalid destructuring like
([v0, Promise.all(v6),])=>{...}. That crashes new Function() with
"Invalid destructuring assignment target".

Fix at the Val.Object.add site: when adding an async val to an object,
force val.var() so val.inline is always a plain identifier before
completeObjectVal consumes it. This mirrors how embedTransformation
already allocates its own outputVar. Top-level arrays-of-async are
unaffected because they never pass through Val.Object.add.

Update the two recursive async snapshots and refresh the failing-tests
baseline from 26 down to 17 (the two async recursive tests now pass, no
new regressions).

https://claude.ai/code/session_01BJsvHrEpohu52TgoWywggM
The fix in B.Val.Object.add materializes async field vals as plain
identifiers before val->merge is called. It can't move into
completeObjectVal because B.merge deletes .allocate from every walked
prev (line 1512: delete val$1.a), so calling val.var() afterward hits
"target.a is not a function" and nothing gets allocated into the
already-emitted code.

https://claude.ai/code/session_01BJsvHrEpohu52TgoWywggM
Fix async field handling in object parsing
claude and others added 12 commits April 19, 2026 13:40
Replace removed helpers in the e2e harness and PPX tests:
- S.compile(~input,~output,~mode,~typeValidation) -> S.decoder/asyncDecoder(~from,~to)
- S.ErrorClass.constructor(~code,~flag,~path) -> S.Error.make(errorDetails)
- exception S.Error -> S.Exn
- Drop U.error/taggedFlag/errorPayload; pass errorDetails directly.

Update PPX tests that used format functions (S.string->S.url/S.email) since
those are now constant schemas. Substitute S.meta(...) to keep the same
pipeline shape. Comment out @s.null tests which still depend on stale
sury-ppx output (S.null returns null<'a>, not option<'a>).

https://claude.ai/code/session_017Zo1Q5UjbU7VZ76SvNK2eB
- Replace S.meta workaround with direct S.url / S.email refinement
  schemas (now exposed as t<string> rather than refinement functions),
  calling S.enableUrl/S.enableEmail before use.
- Add a local S.gen.ts shim so the relative `./S.gen` import that
  genType now emits in GenType.gen.ts resolves to sury/src/S.gen.

https://claude.ai/code/session_017Zo1Q5UjbU7VZ76SvNK2eB
Bump `rescript` and `@rescript/darwin-arm64` from 12.0.0-beta.5 to
12.2.0 across sury and e2e. ReScript 12.2.0 moves the stdlib imports
to `@rescript/runtime`, which regenerates the committed `.res.mjs`
files accordingly.

Note: 12.2.0 still emits a relative `./S.gen` import in
GenType.gen.ts for workspace (file:) dependencies instead of the
package-qualified `sury/src/S.gen`, so the S.gen.ts shim is kept
(with a comment pointing at the upstream behavior).

https://claude.ai/code/session_017Zo1Q5UjbU7VZ76SvNK2eB
12.2.0 didn't resolve the genType ./S.gen import path issue and pulls
in the new @rescript/runtime stdlib, which regenerates every committed
.res.mjs. Revert both `rescript` and `@rescript/darwin-arm64` back to
12.0.0-beta.5 and regenerate the .res.mjs files. The S.gen.ts shim
stays in place as the workaround until the genType behavior is fixed
upstream.

https://claude.ai/code/session_017Zo1Q5UjbU7VZ76SvNK2eB
The @s.null blocks were commented out under the assumption that the
sury-ppx output needed to be updated. It doesn't — the PPX already
emits S.nullAsOption for @s.null (see
packages/sury-ppx/src/ppx/Structure.ml:147-152). The actual failure
was that `type null = ...` shadows Stdlib.null, which breaks
subsequent @s.null @s.default tests in the same file. Rename the
type to nullType, drop the "@s.matches(S.url)" shim that had to use
S.meta as a workaround, and re-enable both tests.

Also update the S.gen.ts shim banner to FIXME and link the upstream
issue: rescript-lang/rescript#8375

https://claude.ai/code/session_017Zo1Q5UjbU7VZ76SvNK2eB
The rename was a workaround against the stale sury-ppx@11.0.0-alpha.2
npm binary, whose @s.null codegen still emits S.null (old API). The
current PPX source already emits S.nullAsOption, so `type null =
@s.null option<string>` compiles and tests pass when built from
packages/sury-ppx/src.

https://claude.ai/code/session_017Zo1Q5UjbU7VZ76SvNK2eB
Update API calls to match new Sury schema compilation interface
Captures the three-tier dispatch rules (same-tag group, nullish bridge,
fallback) agreed for union-to-union coercion, with worked examples for
the bigint|float|null ↔ string|undefined case.

https://claude.ai/code/session_01T4TiVLQPnSf9tcpCbpvDWf
The three-tier algorithm applies to any source schema; a union source
just re-runs it per variant.

https://claude.ai/code/session_01T4TiVLQPnSf9tcpCbpvDWf
Document union coercion algorithm and matching strategy
Sign up for free to join this conversation on GitHub. Already have an account? Sign in to comment

Labels

None yet

Projects

None yet

Development

Successfully merging this pull request may close these issues.

2 participants