diff --git a/Dockerfile b/Dockerfile index 424990eb5..07a3c6067 100644 --- a/Dockerfile +++ b/Dockerfile @@ -44,4 +44,4 @@ EXPOSE 10001 # Table Storage Port EXPOSE 10002 -CMD ["azurite", "-l", "/data", "--blobHost", "0.0.0.0","--queueHost", "0.0.0.0", "--tableHost", "0.0.0.0"] +CMD ["azurite", "-l", "/data", "--blobHost", "0.0.0.0", "--queueHost", "0.0.0.0", "--tableHost", "0.0.0.0"] diff --git a/README.md b/README.md index 79088a980..482739932 100644 --- a/README.md +++ b/README.md @@ -217,11 +217,11 @@ Following extension configurations are supported: docker run -p 10000:10000 -p 10001:10001 -p 10002:10002 mcr.microsoft.com/azure-storage/azurite ``` -`-p 10000:10000` will expose blob service's default listening port. +`-p 10000:10000` will expose blob service's default listening port. The DFS (ADLS Gen2) service is also available on this port. `-p 10001:10001` will expose queue service's default listening port. `-p 10002:10002` will expose table service's default listening port. -Or just run blob service: +Or just run blob service (DFS is included automatically): ```bash docker run -p 10000:10000 mcr.microsoft.com/azure-storage/azurite azurite-blob --blobHost 0.0.0.0 @@ -331,7 +331,7 @@ You can customize the listening address per your requirements. ### Listening Port Configuration -Optional. By default, Azurite V3 will listen to 10000 as blob service port, and 10001 as queue service port, and 10002 as the table service port. +Optional. By default, Azurite V3 will listen to 10000 as blob service port (the DFS/ADLS Gen2 service is also served on this port), 10001 as queue service port, and 10002 as the table service port. You can customize the listening port per your requirements. > Warning: After using a customized port, you need to update connection string or configurations correspondingly in your Storage Tools or SDKs. diff --git a/docs/designs/ADLS-gen2-parity.md b/docs/designs/ADLS-gen2-parity.md new file mode 100644 index 000000000..01c18b7f0 --- /dev/null +++ b/docs/designs/ADLS-gen2-parity.md @@ -0,0 +1,190 @@ +# ADLS Gen2 Parity Implementation Plan + +## Context + +Azurite previously had a **thin DFS proxy layer** on a dedicated port (10004) that translated a small subset of ADLS Gen2 DFS REST API calls to Blob REST API calls via HTTP proxying (axios). This covered only filesystem (container) create/delete/HEAD and account listing. Full ADLS Gen2 parity requires native support for path (file/directory) operations, the append-then-flush write pattern, rename/move, ACLs, and list paths — none of which can be achieved by simple query-parameter rewriting. + +## Architectural Decision: Hybrid (Native DFS Handlers + Shared Port) + +Replace the HTTP proxy with a **native Express pipeline** mounted inside `BlobRequestListenerFactory` that directly accesses `IBlobMetadataStore` and `IExtentStore` — the same store instances used by the blob handlers. DFS and Blob share a single listener on port 10000; routing is done by URL prefix inside the existing server. + +``` +Port 10000 + ├─ /devstoreaccount1/?resource=filesystem → DFS Handlers → IBlobMetadataStore + IExtentStore + ├─ /devstoreaccount1// → DFS Handlers → same stores + └─ everything else → Blob Handlers → same stores +``` + +There is no separate DFS server or dedicated DFS port. `--dfsHost` / `--dfsPort` CLI flags and the `azurite.dfsHost` / `azurite.dfsPort` VS Code settings have been removed. 
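To make the shared-port routing concrete, here is a minimal sketch of how DFS traffic could be detected and peeled off inside the existing blob listener. It is illustrative only: the `isDfsRequest` and `dfsOrBlobRouter` names are hypothetical, and the real `DfsDispatchMiddleware` additionally consults the HTTP method and request headers.

```ts
// Illustrative sketch only — helper names are not taken from the implementation.
import { NextFunction, Request, Response } from "express";

function isDfsRequest(req: Request): boolean {
  // DFS traffic is recognizable by its query parameters rather than by port:
  // filesystem operations carry ?resource=filesystem, path operations carry
  // ?resource=file|directory or an ?action= parameter (append, flush, ...).
  const q = req.query;
  return (
    q.resource === "filesystem" ||
    q.resource === "file" ||
    q.resource === "directory" ||
    typeof q.action === "string"
  );
}

// Mounted inside the existing blob listener: DFS requests go to the native
// DFS pipeline, everything else falls through to the generated blob handlers.
export function dfsOrBlobRouter(
  dfsPipeline: (req: Request, res: Response, next: NextFunction) => void
) {
  return (req: Request, res: Response, next: NextFunction) => {
    if (isDfsRequest(req)) {
      dfsPipeline(req, res, next);
    } else {
      next();
    }
  };
}
```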
+ +**Why not keep proxying?** DFS operations like List Paths, Create Directory, Rename, ACLs, and append-then-flush have no single blob API equivalent. Proxying would require multi-call orchestration, lose atomicity, and add latency. + +**Why shared port instead of separate listener?** The DFS and Blob APIs share the same account/container/blob namespace. A separate listener would require passing live store references across server boundaries and duplicating TLS/auth/logging configuration. Mounting DFS routing inside the existing server is simpler and keeps all requests to a single endpoint — matching how Azure itself exposes both APIs on `*.blob.core.windows.net` / `*.dfs.core.windows.net` (separate hostnames but the same backing infrastructure). + +### Directory Model + +Directories stored as **zero-length BlockBlobs with `hdi_isfolder=true` metadata** — matching Azure's real internal behavior. No separate table needed. + +### ACL Storage + +New fields on `BlobModel`: `dfsAclOwner`, `dfsAclGroup`, `dfsAclPermissions`, `dfsAcl`. LokiJS is schemaless (just add fields); SQL needs ALTER TABLE. + +--- + +## Phase 0: Foundation — Shared Store Access & HNS Flag + +**Goal:** Wire DFS server to share stores with blob server; enable HNS mode. + +| File | Change | +|------|--------| +| `src/blob/utils/constants.ts` | Set `EMULATOR_ACCOUNT_ISHIERARCHICALNAMESPACEENABLED = true` (or make configurable) | +| `src/blob/BlobServer.ts` | Expose `metadataStore`, `extentStore`, and `accountDataStore` via public getters | +| `src/blob/BlobRequestListenerFactory.ts` | Mount `DfsRequestListenerFactory` as a sub-router on DFS URL patterns | +| `src/blob/DfsRequestListenerFactory.ts` | Rewrite: replace axios proxy with native Express pipeline + DFS routing | +| `src/blob/IBlobEnvironment.ts`, `BlobEnvironment.ts`, `src/common/Environment.ts`, `VSCEnvironment.ts` | Add `--enableHierarchicalNamespace` option; remove `--dfsHost`/`--dfsPort` | + +**Deliverable:** DFS requests are served on the blob port; existing filesystem tests pass via direct store access. No separate DFS listener or port. + +--- + +## Phase 1: Path CRUD + List Paths + +**Goal:** Create/delete/read files and directories, list paths — the core operations most ADLS Gen2 SDKs depend on. + +### New files to create + +| File | Purpose | +|------|---------| +| `src/blob/dfs/DfsContext.ts` | DFS request context (account, filesystem, path) — analogous to `BlobStorageContext` | +| `src/blob/dfs/DfsOperation.ts` | Enum of DFS operations for dispatch | +| `src/blob/dfs/DfsDispatchMiddleware.ts` | Routes requests by `resource` param, `action` param, method, and headers | +| `src/blob/dfs/DfsErrorFactory.ts` | JSON error responses (`PathNotFound`, `DirectoryNotEmpty`, etc.) 
| +| `src/blob/dfs/DfsSerializer.ts` | JSON response serialization (DFS uses JSON, not XML) | +| `src/blob/dfs/handlers/FilesystemHandler.ts` | Filesystem ops → container store operations | +| `src/blob/dfs/handlers/PathHandler.ts` | Path create/delete/read/getProperties + listPaths | + +### Operations implemented + +- **Create Path** (`PUT ?resource=file|directory`): Creates zero-length BlockBlob; directories get `hdi_isfolder=true` metadata; auto-creates intermediate directories +- **Delete Path** (`DELETE`): Files → `deleteBlob()`; directories with `recursive=true` → delete all blobs with prefix; `recursive=false` → 409 if non-empty +- **Get Path Properties** (`HEAD`): Returns `x-ms-resource-type: file|directory` header +- **Read Path** (`GET`): Streams file content via `downloadBlob()` (follows `BlobHandler.download()` pattern) +- **List Paths** (`GET ?resource=filesystem&directory=...&recursive=true|false`): JSON response with `paths` array; uses `listBlobs()` with prefix/delimiter; supports continuation via `x-ms-continuation` + +### Existing files modified + +| File | Change | +|------|--------| +| `src/blob/persistence/IBlobMetadataStore.ts` | Add `dfsResourceType`, ACL fields to `BlobModel` / `IBlobAdditionalProperties` | +| `src/blob/persistence/LokiBlobMetadataStore.ts` | No schema changes needed (schemaless) | +| `src/blob/persistence/SqlBlobMetadataStore.ts` | Add columns: `dfsResourceType`, `dfsAclOwner`, `dfsAclGroup`, `dfsAclPermissions`, `dfsAcl` | + +### Tests + +Extend `tests/blob/dfsProxy.test.ts`: +- Create file / directory, verify as blob +- Delete file / empty dir / non-empty dir with recursive +- Get properties with `x-ms-resource-type` +- Read file content +- List paths recursive and non-recursive +- Cross-API: create via DFS → read via Blob API and vice versa + +--- + +## Phase 2: Append-Flush Write Pattern + +**Goal:** Implement the DFS file write model (create empty → append chunks → flush to commit). + +### Key insight + +DFS append-then-flush maps directly to existing **BlockBlob uncommitted blocks** infrastructure: each `action=append` becomes a `stageBlock()`, and `action=flush` becomes `commitBlockList()`. No new persistence methods needed. + +### Changes to `src/blob/dfs/handlers/PathHandler.ts` + +- **`updatePath_Append(position, body)`**: Write body to `IExtentStore` as extent chunk; record as uncommitted block via `metadataStore.stageBlock()`; validate `position` matches current append offset; return 202 +- **`updatePath_Flush(position, close)`**: Commit all staged blocks via `metadataStore.commitBlockList()`; update content length to `position`; return 200 with updated ETag + +### Tests + +- Create → append 3 chunks → flush → read back, verify content +- Append with wrong position → 400 +- Large file (multi-MB) append + +--- + +## Phase 3: Rename/Move Path + +**Goal:** Atomic rename for files and directories. + +### New persistence methods + +| Method | Description | +|--------|-------------| +| `IBlobMetadataStore.renameBlob(src, dest)` | Atomic rename of single blob (metadata-only, no extent copy) | +| `IBlobMetadataStore.renameBlobsByPrefix(srcPrefix, destPrefix)` | Atomic rename of all blobs matching prefix (for directory rename) | + +### PathHandler addition + +- **`renamePath(x-ms-rename-source)`**: Parse source header → for files: `renameBlob()`; for directories: `renameBlobsByPrefix()`. Supports cross-filesystem rename and conditional headers. 
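For illustration, a rough sketch of the intended `renamePath` flow. Only the store method names come from the table above; the method signatures and the `isDirectory` helper are assumptions.

```ts
// Illustrative sketch only — signatures are assumptions, not the planned interface.
import { Request } from "express";

interface RenameStore {
  renameBlob(account: string, srcFs: string, src: string, destFs: string, dest: string): Promise<void>;
  renameBlobsByPrefix(account: string, srcFs: string, srcPrefix: string, destFs: string, destPrefix: string): Promise<void>;
}

async function renamePath(
  store: RenameStore,
  account: string,
  destFilesystem: string,
  destPath: string,
  req: Request,
  isDirectory: (fs: string, path: string) => Promise<boolean>
): Promise<void> {
  // x-ms-rename-source has the form "/<filesystem>/<path>"
  const source = String(req.headers["x-ms-rename-source"] || "");
  const [, srcFilesystem, ...segments] = source.split("/");
  const srcPath = segments.join("/");

  if (await isDirectory(srcFilesystem, srcPath)) {
    // Move the zero-length directory marker blob itself...
    await store.renameBlob(account, srcFilesystem, srcPath, destFilesystem, destPath);
    // ...then every child blob sharing the "<dir>/" prefix, in one store call.
    await store.renameBlobsByPrefix(
      account, srcFilesystem, srcPath + "/", destFilesystem, destPath + "/"
    );
  } else {
    // File rename: metadata-only move, no extent copy.
    await store.renameBlob(account, srcFilesystem, srcPath, destFilesystem, destPath);
  }
}
```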
+ +### Persistence implementations + +- **LokiJS**: Update document `containerName` and `name` properties +- **SQL**: `UPDATE ... SET name = REPLACE(name, oldPrefix, newPrefix) WHERE name LIKE 'prefix%'` in transaction + +### Tests + +- Rename file within filesystem / across filesystems +- Rename directory (verify children moved) +- Rename non-existent → 404 +- Rename with conditional headers + +--- + +## Phase 4: ACL Operations + +**Goal:** POSIX ACL get/set for emulator parity. + +### PathHandler additions + +- **`getAccessControl()`**: Read ACL fields from blob record → return as `x-ms-owner`, `x-ms-group`, `x-ms-permissions`, `x-ms-acl` headers. Defaults: `$superuser`/`$superuser`/`rwxr-x---` +- **`setAccessControl(owner, group, permissions, acl)`**: Validate ACL format → update blob record +- **`setAccessControlRecursive(mode, acl)`**: `mode` = set|modify|remove; iterate blobs under prefix; support continuation; return JSON with `directoriesSuccessful`, `filesSuccessful`, `failureCount` + +### Tests + +- Set/get ACL on file and directory +- Recursive ACL set on directory tree +- Default ACL values on new paths + +--- + +## Phase 5: Polish & Remaining Operations + +- **Set Filesystem Properties** (`PATCH ?resource=filesystem`) → `setContainerMetadata()` +- **`x-ms-properties` encoding/decoding** — new `src/blob/dfs/DfsPropertyEncoding.ts` utility (base64 key=value pairs) +- **DFS JSON error format**: `{"error":{"code":"...","message":"..."}}` +- **Lease support** on DFS paths (reuse blob lease infrastructure) +- **SAS validation** on DFS endpoints (reuse existing authenticators) +- **Content-MD5/CRC64 validation** on append + +--- + +## Verification Plan + +1. **Unit tests**: Extend `tests/blob/dfsProxy.test.ts` per phase +2. **Cross-API tests**: Verify DFS-created data is visible via Blob API and vice versa +3. **SDK integration**: Test with `@azure/storage-file-datalake` Node.js SDK against the emulator +4. **Manual smoke test**: Run Azurite, use Azure Storage Explorer with DFS endpoint +5. **Existing blob tests**: Ensure `npm test` still passes (no regression) + +--- + +## Critical Reference Files + +- `src/blob/handlers/ContainerHandler.ts` — pattern for handler ↔ store interaction +- `src/blob/handlers/BlockBlobHandler.ts` — `stageBlock`/`commitBlockList` for append-flush reuse +- `src/blob/handlers/BlobHandler.ts` — `download()` pattern for Read Path +- `src/blob/persistence/IBlobMetadataStore.ts` — store interface to extend +- `src/blob/generated/handlers/` — handler interface patterns +- `src/blob/middlewares/blobStorageContext.middleware.ts` — context extraction pattern for DfsContext diff --git a/docs/designs/ADLS-gen2-review.md b/docs/designs/ADLS-gen2-review.md new file mode 100644 index 000000000..411e5642c --- /dev/null +++ b/docs/designs/ADLS-gen2-review.md @@ -0,0 +1,491 @@ +# ADLS Gen2 PR — Code Review + +Internal review of branch `jsavard/adls-gen2`. Issues ordered by severity. 
+ +Legend: ✅ Fixed | 🔲 Pending + +--- + +## Pass 1 — All Fixed (commit c2f6204) + +| ID | Summary | Status | +|----|---------|--------| +| C-1 | `flushData` loses data on second flush cycle | ✅ | +| C-2 | HNS hierarchy rows leaked on container delete | ✅ | +| C-3 | `FilesystemHandler.getProperties` returns wrong HNS flag | ✅ | +| C-4 | `FilesystemHandler.getProperties` leaks `azurite_hns_enabled` in `x-ms-properties` | ✅ | +| C-5 | `PathHandler.create` missing ACL enforcement | ✅ | +| C-6 | `checkApiVersion` synchronous throw in DFS context middleware | ✅ | +| M-1 | Rename silently overwrites destination / no non-empty-dir guard | ✅ | +| M-2 | `setProperties` allows overwriting internal metadata keys | ✅ | +| M-3 | `safeGetBlobProperties` swallows all errors, not just 404 | ✅ | +| M-4 | `getAccountInfo` — unhandled 404 from `getContainerProperties` | ✅ | +| M-5 | SQL bulk rename — constraint violation surfaced as 500 | ✅ | +| M-6 | Batch delete — no 404 tolerance for concurrent deletes | ✅ | +| M-7 | LIKE patterns — user-controlled paths not escaping `%` and `_` | ✅ | +| M-8 | Custom ETags don't match Azure `"0x..."` format | ✅ | +| m-1 | Dead code: `renameBlob`, `renameBlobsByPrefix`, `renameHnsPaths`, etc. | ✅ | +| m-2 | `failureCount` always 0 in `setAccessControlRecursive` | ✅ | +| m-3 | `FilesystemHandler.setProperties` allows overwriting `azurite_hns_enabled` | ✅ | +| m-4 | Stream error in `read` after headers sent | ✅ | +| m-5 | `maxResults`/`maxRecords` not validated for NaN/negative | ✅ | +| m-6 | `ensureIntermediateDirectories` called after `renamePathAtomic` | ✅ | +| m-7 | User-agent sniffing — documented limitation | ✅ | +| m-8 | Named group ACL ignored — documented limitation | ✅ | + +--- + +## Pass 2 — Current findings (commit 86c3eba baseline) + +--- + +## Critical + +### [C-1] `flushData` loses data on second flush cycle +**File:** `src/blob/dfs/handlers/PathHandler.ts` ~line 595 + +`commitBlockList` is built only from the current batch of uncommitted blocks, not the previously-committed ones. After `append→flush→append→flush`, the second flush wipes out the first flush's data. + +**Fix:** Prepend `blob.committedBlocksInOrder` (as `Committed` entries) to the commit list before calling `commitBlockList`: +```ts +const previouslyCommitted = (blob.committedBlocksInOrder || []).map(b => ({ + blockName: b.name, + blockCommitType: "Committed" +})); +const commitList = [ + ...previouslyCommitted, + ...sortedBlocks.map(b => ({ blockName: b.name, blockCommitType: "Uncommitted" })) +]; +``` + +**Test gap:** No test covers two complete `append→flush` cycles on the same file. + +--- + +### [C-2] HNS hierarchy rows leaked on container/filesystem delete +**Files:** `src/blob/persistence/SqlBlobMetadataStore.ts`, `src/blob/persistence/LokiBlobMetadataStore.ts` — `deleteContainer` + +Both stores clean up blobs and blocks but never delete the matching `HnsHierarchy` rows. Re-creating a container with the same name inherits stale hierarchy entries. + +**Fix:** Add a `HnsHierarchy` delete-by-container step inside `deleteContainer` (within the existing transaction for SQL, immediately after blob removal for Loki). + +--- + +### [C-3] `FilesystemHandler.getProperties` returns server-wide HNS flag, ignores per-container value +**File:** `src/blob/dfs/handlers/FilesystemHandler.ts` line 108 + +Returns `String(this.enableHierarchicalNamespace)` instead of reading `container.metadata["azurite_hns_enabled"]`. 
+ +**Fix:** +```ts +const hns = result.metadata?.["azurite_hns_enabled"] === "true" || + (result.metadata?.["azurite_hns_enabled"] === undefined && this.enableHierarchicalNamespace); +res.setHeader("x-ms-namespace-enabled", String(hns)); +``` + +--- + +### [C-4] `FilesystemHandler.getProperties` leaks `azurite_hns_enabled` in `x-ms-properties` +**File:** `src/blob/dfs/handlers/FilesystemHandler.ts` lines 110-117 + +No `internalKeys` filter unlike `PathHandler.getProperties`. Clients receive and may round-trip the reserved key, corrupting the HNS flag. + +**Fix:** Filter `azurite_hns_enabled` before building the `x-ms-properties` header: +```ts +const internalKeys = new Set(["azurite_hns_enabled"]); +const properties = Object.entries(result.metadata) + .filter(([key]) => !internalKeys.has(key)) + .map(([key, value]) => `${key}=${Buffer.from(value).toString("base64")}`) + .join(","); +``` + +--- + +### [C-5] `PathHandler.create` has no ACL enforcement +**File:** `src/blob/dfs/handlers/PathHandler.ts` lines 37-125 + +Every other operation (`delete`, `getProperties`, `read`, `listPaths`, `update`, `rename`) enforces ACL, but `create` does not. In `--oauth acl` mode, any authenticated caller can create files or directories anywhere. + +**Fix:** Enforce write on the parent directory at the start of the non-rename path: +```ts +const parentPath = pathName.includes("/") + ? pathName.substring(0, pathName.lastIndexOf("/")) + : ""; +if (!(await this.enforceAcl(ctx, res, account, filesystem, parentPath, "w"))) return; +``` + +--- + +### [C-6] `checkApiVersion` in DFS context middleware throws synchronously — crash risk +**File:** `src/blob/dfs/DfsContext.ts` lines 52-56 + +`checkApiVersion` can throw a `StorageError` synchronously inside a non-async `RequestHandler`. Express does not forward synchronous throws to the error handler, crashing the request. + +**Fix:** +```ts +try { + checkApiVersion(apiVersion, ValidAPIVersions, requestId); +} catch (error) { + next(error); + return; +} +``` + +--- + +## Major + +### [M-1] `renamePath` silently overwrites or corrupts the destination if it already exists +**File:** `src/blob/dfs/handlers/PathHandler.ts` ~line 1005; `SqlBlobMetadataStore.ts` `renamePathAtomic` + +No existence check before `renamePathAtomic`. The SQL path throws a unique-constraint violation returned as 500; Loki leaves a duplicate row. + +**Fix:** Check destination existence and either delete it (overwrite semantics) or return `PathAlreadyExists`, within the same transaction. + +**Test gap:** No test exercises rename to an already-existing destination. + +--- + +### [M-2] `setProperties` allows overwriting internal metadata keys +**File:** `src/blob/dfs/handlers/PathHandler.ts` lines 783-822 + +User-controlled `x-ms-properties` pairs are merged into metadata without filtering. A client can overwrite `hdi_isfolder` (converting file↔directory) or ACL fields (`dfsAclOwner`, `dfsAclGroup`, etc.). + +**Fix:** Block reserved keys during merge: +```ts +const reservedKeys = new Set(["hdi_isfolder", "dfsAclOwner", "dfsAclGroup", "dfsAclPermissions", "dfsAcl"]); +if (!reservedKeys.has(key)) { + metadata[key] = value; +} +``` + +--- + +### [M-3] `safeGetBlobProperties` swallows all errors, not just 404 +**File:** `src/blob/dfs/handlers/PathHandler.ts` lines 1202-1215 + +Any DB error is treated as "path not found". In `enforceAcl`, this means "allow" — store errors silently bypass ACL enforcement. 
+ +**Fix:** Only swallow `statusCode === 404`: +```ts +} catch (error: any) { + if (error.statusCode === 404) return undefined; + throw error; +} +``` + +--- + +### [M-4] `BlobHandler`/`ContainerHandler.getAccountInfo` can throw unhandled 404 +**Files:** `src/blob/handlers/BlobHandler.ts` ~line 964; `src/blob/handlers/ContainerHandler.ts` ~line 848 + +`getContainerProperties` is called without try/catch. If the container was deleted between routing and handling, an unhandled 404 propagates. + +**Fix:** Wrap in try/catch and fall back to the server-wide HNS default on 404. + +--- + +### [M-5] SQL bulk rename may throw constraint violation as 500 on destination conflict +**File:** `src/blob/persistence/SqlBlobMetadataStore.ts` `renamePathAtomic` + +No pre-check for duplicate destination names. Sequelize unique-constraint errors surface as generic 500. + +**Fix:** Catch Sequelize constraint errors and map to `PathAlreadyExists`. + +--- + +### [M-6] `Promise.all` batch delete has no 404 tolerance for concurrent deletes +**File:** `src/blob/dfs/handlers/PathHandler.ts` lines 163-170 + +If a child is deleted concurrently between `listBlobs` and the batch delete, the whole recursive delete fails. + +**Fix:** Swallow 404 per-item: +```ts +.map(child => + this.metadataStore.deleteBlob(...).catch((e: any) => { + if (e.statusCode !== 404) throw e; + }) +) +``` + +--- + +### [M-7] `LIKE` patterns use user-controlled paths without escaping `%` and `_` +**File:** `src/blob/persistence/SqlBlobMetadataStore.ts` lines 3707, 3756, 3828, 3882, 3922 + +A path containing `%` or `_` (SQL LIKE wildcards) would match unintended rows in rename, delete, and list queries. + +**Fix:** Escape wildcards before building `LIKE` patterns: +```ts +const escapedPrefix = sourcePrefix.replace(/%/g, '\\%').replace(/_/g, '\\_'); +``` + +--- + +### [M-8] ETags generated in PathHandler don't match Azure's `"0x..."` format +**File:** `src/blob/dfs/handlers/PathHandler.ts` lines 80, 601, 1099 + +Uses `` `"${new Date().getTime().toString(16)}"` `` producing short lowercase hex. SDK conditional-request validation may reject these. + +**Fix:** Use `newEtag()` from `src/common/utils/utils.ts` at all three sites. + +--- + +## Minor + +### [m-1] Dead code: `isHnsDirectoryEmpty`, `hnsPathExists`, `renameBlob`, `renameBlobsByPrefix` +Declared in `IBlobMetadataStore`, implemented in both stores, never called from any handler. Remove or annotate with `// TODO`. + +### [m-2] `failureCount` in `setAccessControlRecursive` always reports 0 +**File:** `src/blob/dfs/handlers/PathHandler.ts` line 700 — declared as `const`, never incremented inside the per-path catch. +**Fix:** Change to `let failureCount = 0` and increment on error. + +### [m-3] `FilesystemHandler.setProperties` allows overwriting `azurite_hns_enabled` +**File:** `src/blob/dfs/handlers/FilesystemHandler.ts` lines 178-190 — same issue as M-2 but for filesystem metadata. Filter the reserved key. + +### [m-4] Stream errors in `read` after headers are sent +**File:** `src/blob/dfs/handlers/PathHandler.ts` lines 291-344 — `sendDfsError` called on an already-started response produces "Cannot set headers after they are sent". +**Fix:** Check `res.headersSent` and call `res.destroy(error)` instead. + +### [m-5] `maxResults`/`maxRecords` not validated for NaN or negative values +**File:** `src/blob/dfs/handlers/PathHandler.ts` lines 354-356, 685-687 — `parseInt("garbage", 10)` returns `NaN`. +**Fix:** `Math.max(1, Math.min(5000, parseInt(..., 10) || 5000))`. 
+ +### [m-6] `ensureIntermediateDirectories` called after `renamePathAtomic` commits +**File:** `src/blob/dfs/handlers/PathHandler.ts` lines 1045-1060 — if intermediate-dir creation fails after a successful rename, the blob exists at the new path but the hierarchy is inconsistent. +**Fix:** Call `ensureIntermediateDirectories` before `renamePathAtomic`. + +### [m-7] User-agent sniffing for DFS routing is fragile +**File:** `src/blob/BlobRequestListenerFactory.ts` lines 88-94 — any client whose UA contains "datalake" gets DFS routing regardless of intent. + +### [m-8] Named group ACL entries silently ignored +**File:** `src/blob/dfs/DfsAclEnforcer.ts` lines 178-196 — `group::rwx` entries are parsed but never evaluated. Should at minimum be documented. + +--- + +## Pass 1 — Test Gaps (all fixed in commit 86c3eba) + +| # | Scenario | Status | +|---|----------|--------| +| 1 | Multi-cycle `append→flush→append→flush` | ✅ | +| 2 | Rename to existing destination | ✅ | +| 3 | ETag format validation (`"0x..."` pattern) | ✅ | +| 4 | `setProperties` with reserved key names | ✅ | +| 5 | Container/filesystem delete cleans up HNS hierarchy | ✅ (code only) | +| 6 | ACL enforcement blocks `create` when lacking parent write | ✅ | +| 7 | Non-numeric `?position=garbage` | ✅ | +| 8 | Path names containing `%` or `_` in SQL rename/delete | ✅ (code only) | + +--- + +## Pass 2 — New Issues (baseline: commit 86c3eba) + +### Critical + +#### [P2-C-1] `FilesystemHandler.setProperties` wipes `azurite_hns_enabled` on every PATCH 🔲 +**File:** `src/blob/dfs/handlers/FilesystemHandler.ts` lines 174–222 +**Problem:** `setProperties` builds metadata only from the request; never reads existing container metadata first. `setContainerMetadata` does a full replacement, so `azurite_hns_enabled` is erased on every `PATCH ?resource=filesystem`. Subsequent `getProperties` falls back to the server-wide flag. +**Fix:** Read existing metadata with `getContainerProperties`, preserve `azurite_hns_enabled`, then overlay client-supplied properties before calling `setContainerMetadata`. + +#### [P2-C-2] `ContainerHandler.setMetadata` (Blob API) also wipes `azurite_hns_enabled` 🔲 +**File:** `src/blob/handlers/ContainerHandler.ts` lines 202–230 +**Problem:** `PUT ?comp=metadata` replaces the entire metadata map; `azurite_hns_enabled` is not preserved. A Blob SDK `SetContainerMetadata` call after creating an HNS container silently disables HNS. +**Fix:** Same as P2-C-1 — read existing metadata and preserve the reserved key. + +#### [P2-C-3] `azurite_hns_enabled` leaks as user-visible metadata via Blob API 🔲 +**File:** `src/blob/handlers/ContainerHandler.ts` `getContainerProperties` ~line 133 +**Problem:** `GetContainerProperties` returns metadata unfiltered. SDK clients receive `x-ms-meta-azurite_hns_enabled` as a user metadata header, polluting the metadata map and enabling round-trip corruption. +**Fix:** Filter `azurite_hns_enabled` from metadata in `getContainerProperties` response, mirroring `FilesystemHandler.getProperties`. + +#### [P2-C-4] `x-ms-meta-azurite_hns_enabled` header lets clients forge the HNS flag 🔲 +**File:** `src/blob/dfs/handlers/FilesystemHandler.ts` `extractMetadata` ~line 224 +**Problem:** `extractMetadata` reads all `x-ms-meta-*` headers verbatim, including `x-ms-meta-azurite_hns_enabled`. A client can send this header to disable HNS on any writable container. The `x-ms-properties` path already filters this key; the `x-ms-meta-*` path does not. +**Fix:** In `extractMetadata`, skip the `azurite_hns_enabled` key. 
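A minimal sketch of the intended filter, assuming a typical header-scanning shape for `extractMetadata` (the actual signature may differ):

```ts
// Illustrative sketch only — the real extractMetadata signature may differ.
import { IncomingHttpHeaders } from "http";

const RESERVED_METADATA_KEYS = new Set(["azurite_hns_enabled"]);

function extractMetadata(headers: IncomingHttpHeaders): { [key: string]: string } {
  const metadata: { [key: string]: string } = {};
  for (const [name, value] of Object.entries(headers)) {
    if (!name.toLowerCase().startsWith("x-ms-meta-") || value === undefined) {
      continue;
    }
    const key = name.substring("x-ms-meta-".length);
    // Skip reserved internal keys so clients cannot forge the HNS flag.
    if (RESERVED_METADATA_KEYS.has(key.toLowerCase())) {
      continue;
    }
    metadata[key] = Array.isArray(value) ? value.join(",") : value;
  }
  return metadata;
}
```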
+ +--- + +### Major + +#### [P2-M-1] `listPaths` returns `200 {paths:[]}` instead of `404` for non-existent directory 🔲 +**File:** `src/blob/dfs/handlers/PathHandler.ts` lines 361–430 +**Problem:** When `?directory=nonexistent` is set and no blobs match, the response is `200 { paths: [] }`. Azure returns `404 PathNotFound`. DataLake SDK `listPaths` relies on 404 to detect missing directories. +**Fix:** After `listBlobs`, if results are empty and `directory` was specified, check if the directory blob exists; if not, return `sendDfsError(res, pathNotFound(directory))`. + +#### [P2-M-2] `PathHandler.delete` does not handle 412 conditional header mismatch 🔲 +**File:** `src/blob/dfs/handlers/PathHandler.ts` lines 204–211 +**Problem:** The `catch` block only handles 404. A `deleteBlob` 412 (e.g., `If-Match` header mismatch) is logged and returned as 500 `InternalError`. `getProperties` handles 412 correctly. +**Fix:** Add a 412 handler in the catch block, same pattern as `getProperties`. + +#### [P2-M-3] `DfsContext` 400 response missing `x-ms-error-code` header 🔲 +**File:** `src/blob/dfs/DfsContext.ts` lines 101–104 +**Problem:** Missing account name sends `res.status(400).json(...)` directly, bypassing `sendDfsError`. Azure SDKs require the `x-ms-error-code` header for structured error parsing. +**Fix:** Replace with `sendDfsError(res, { statusCode: 400, code: "InvalidQueryParameterValue", message: "Account name is required." }); return;` + +#### [P2-M-4] Multi-block read stream not destroyed on error — resource leak 🔲 +**File:** `src/blob/dfs/handlers/PathHandler.ts` lines 319–329 +**Problem:** `stream.on("error", reject)` does not call `stream.destroy()`. The stream continues emitting after the Promise rejects, potentially writing to a closed response. +**Fix:** `stream.on("error", (err) => { stream.destroy(); reject(err); });` + +#### [P2-M-5] `x-ms-lease-break-period` NaN propagated to `breakBlobLease` 🔲 +**File:** `src/blob/dfs/handlers/PathHandler.ts` `breakLease` ~line 953 +**Problem:** `parseInt(header, 10)` returns `NaN` for non-numeric values and is passed directly to `breakBlobLease`, producing undefined behavior instead of `400 InvalidHeaderValue`. +**Fix:** Validate the parsed value; return 400 if NaN. + +#### [P2-M-6] Concurrent appends at same position cause silent data loss 🔲 +**File:** `src/blob/dfs/handlers/PathHandler.ts` lines 475–543 +**Problem:** Position check + `stageBlock` are not atomic. Two concurrent appends at `position=0` both pass the check, generate the same block ID, and the second overwrites the first. The first append's extent is leaked. +**Fix:** Document as known limitation, or make position check + block stage a single atomic metadata operation. + +#### [P2-M-7] `listPaths` returns hardcoded owner/group/permissions, ignoring stored ACL 🔲 +**File:** `src/blob/dfs/handlers/PathHandler.ts` lines 394–413 +**Problem:** Every entry in `listPaths` response has `owner: "$superuser"`, `group: "$superuser"`, `permissions: "rwxr-x---"` regardless of stored ACL. ACL-aware applications reading from `listPaths` always see wrong data. +**Fix:** Include `blob.metadata?.dfsAclOwner || "$superuser"` etc. per entry, or document as a known limitation. + +--- + +### Minor + +#### [P2-m-1] `dynamic require("crypto")` inside hot path 🔲 +**File:** `src/blob/dfs/handlers/PathHandler.ts` `appendData` ~line 499 +**Fix:** Move to top-level `import { createHash } from "crypto"`. 
+ +#### [P2-m-2] `parentPath` variable shadowed inside `try` block 🔲 +**File:** `src/blob/dfs/handlers/PathHandler.ts` lines 52 and 104 +**Problem:** Outer `parentPath` (ACL check, empty-string for root) and inner `parentPath` (HNS registration, `null` for root) have the same name but different semantics. +**Fix:** Rename the inner variable to `hnsParentPath`. + +#### [P2-m-3] `FilesystemHandler.setProperties` does not preserve existing user metadata 🔲 +**File:** `src/blob/dfs/handlers/FilesystemHandler.ts` lines 182–207 +**Problem:** Beyond the HNS flag (P2-C-1), user metadata set via `x-ms-meta-*` on prior requests is also overwritten on every `PATCH ?resource=filesystem`. +**Fix:** Read existing metadata and merge, overwriting only the keys from `x-ms-properties`. + +#### [P2-m-4] Dispatch mis-routes `?resource=filesystem` + non-empty path 🔲 +**File:** `src/blob/DfsRequestListenerFactory.ts` lines 87–90 +**Problem:** When `resource=filesystem` AND `ctx.path` is set, any HTTP method is mapped to `Filesystem_ListPaths`. A `PUT` with both conditions would silently become a list operation. +**Fix:** Add a method check (`&& method === "GET"`) or return 400 for the combination. + +#### [P2-m-5] `ensureIntermediateDirectories` accepts a file as a path component 🔲 +**File:** `src/blob/dfs/handlers/PathHandler.ts` `ensureIntermediateDirectories` ~line 1128 +**Problem:** If `a/b` already exists as a file, the loop skips creating the directory entry and the create of `a/b/c` proceeds. Azure returns an error in this case. +**Fix:** After `safeGetBlobProperties`, check if the existing entry is actually a directory; if not, return an appropriate error. + +--- + +## Pass 2 — Test Gaps (fixed in commit aefab39) + +| # | Scenario | Status | +|---|----------|--------| +| 1 | `setProperties` PATCH then verify `x-ms-namespace-enabled` still correct | ✅ | +| 2 | Blob API `SetContainerMetadata` then verify DFS still works | ✅ (code only) | +| 3 | `GetContainerProperties` via Blob API does not expose `azurite_hns_enabled` | ✅ (code only) | +| 4 | `listPaths ?directory=nonexistent` returns 404 | ✅ | +| 5 | `delete` with non-matching `If-Match` returns 412 | ✅ | +| 6 | `listPaths` returns correct `owner`/`group`/`permissions` after `setAccessControl` | ✅ | + +## Pass 2 — All Fixed (commit aefab39) + +| ID | Summary | Status | +|----|---------|--------| +| P2-C-1 | `FilesystemHandler.setProperties` wipes `azurite_hns_enabled` | ✅ | +| P2-C-2 | `ContainerHandler.setMetadata` wipes `azurite_hns_enabled` | ✅ | +| P2-C-3 | `azurite_hns_enabled` leaks via Blob API `getContainerProperties` | ✅ | +| P2-C-4 | `x-ms-meta-azurite_hns_enabled` allows HNS flag forgery | ✅ | +| P2-M-1 | `listPaths` returns 200 empty instead of 404 for non-existent dir | ✅ | +| P2-M-2 | `PathHandler.delete` missing 412 handler | ✅ | +| P2-M-3 | `DfsContext` 400 missing `x-ms-error-code` header | ✅ | +| P2-M-4 | Multi-block read stream not destroyed on error | ✅ | +| P2-M-5 | `x-ms-lease-break-period` NaN propagated | ✅ | +| P2-M-6 | Concurrent appends TOCTOU (documented) | ✅ | +| P2-M-7 | `listPaths` hardcoded ACL values | ✅ | +| P2-m-1 | `dynamic require("crypto")` | ✅ | +| P2-m-2 | `parentPath` shadowed | ✅ | +| P2-m-3 | `FilesystemHandler.setProperties` doesn't preserve user metadata | ✅ | +| P2-m-4 | Dispatch mis-routes `resource=filesystem` + path | ✅ | +| P2-m-5 | `ensureIntermediateDirectories` accepts file as path component | ✅ | + +--- + +## Pass 3 — New Issues (baseline: commit aefab39) + +### Major + +#### [P3-M-1] Recursive delete 
bypasses lease checks on child blobs 🔲 +**File:** `src/blob/dfs/handlers/PathHandler.ts` lines 171–178 +**Problem:** `deleteBlob(..., child.name, {})` passes empty options — no lease conditions. Leased children are force-deleted. Azure rejects recursive delete if any child holds a lease. +**Fix:** Pass the request's `leaseAccessConditions` to each child `deleteBlob` call, or pre-check for leases and return 409 before starting the batch. + +#### [P3-M-2] `listPaths` prefix entries missing `eTag` field 🔲 +**File:** `src/blob/dfs/handlers/PathHandler.ts` lines 418–426 +**Problem:** File entries include `eTag: blob.properties.etag`, but prefix (subdirectory) entries in non-recursive listing have no `eTag`. SDK clients using list results for conditional operations get `undefined`. +**Fix:** Add `eTag: dirProps?.properties.etag` to the prefix directory object. + +#### [P3-M-3] `setAccessControlRecursive` re-applies ACL to root on every continuation page 🔲 +**File:** `src/blob/dfs/handlers/PathHandler.ts` lines 732–784 +**Problem:** `pathName` is always prepended to `allPaths` regardless of whether a continuation token is in use. On page 2+, the root is processed again, inflating `directoriesSuccessful` by 1 per page. +**Fix:** Only include `pathName` on the first page: `const allPaths = continuation ? blobs.map(b => b.name) : [pathName, ...blobs.map(b => b.name)];` + +#### [P3-M-4] `setAccessControlRecursive` silently accepts invalid `mode`, returns 200 🔲 +**File:** `src/blob/dfs/handlers/PathHandler.ts` lines 715, 745–769 +**Problem:** Any value other than `"set"`, `"modify"`, `"remove"` silently no-ops and returns 200. Azure returns 400 for an invalid mode. +**Fix:** Validate at the top: `if (!["set","modify","remove"].includes(mode)) return sendDfsError(res, { statusCode: 400, code: "InvalidQueryParameterValue", message: "Invalid mode." });` + +#### [P3-M-5] `read` handler returns 200 for directory paths instead of 400 🔲 +**File:** `src/blob/dfs/handlers/PathHandler.ts` lines 288–363 +**Problem:** `GET` on a directory blob returns 200 with empty body. Azure returns 400 `PathIsDirectory`. +**Fix:** After `downloadBlob`, check `blob.metadata?.[HNS_DIRECTORY_METADATA_KEY] === "true"` and return `sendDfsError(res, { statusCode: 400, code: "PathIsDirectory", message: "The path is a directory." })`. + +#### [P3-M-6] `renamePathAtomic` leaves orphaned uncommitted blocks after rename mid-append 🔲 +**File:** `src/blob/persistence/LokiBlobMetadataStore.ts` ~line 3579; `src/blob/persistence/SqlBlobMetadataStore.ts` ~line 3699 +**Problem:** Neither implementation updates the blocks collection when renaming. Uncommitted blocks staged under the old path become orphaned; a subsequent flush finds no blocks and silently no-ops. +**Fix:** In SQL, add `BlocksModel.update({ blobName: destPath }, { where: { accountName, containerName, blobName: sourcePath } })` inside the transaction. Mirror in Loki with a `findAndUpdate` on the blocks collection. + +--- + +### Minor + +#### [P3-m-1] Rename destination-overwrite TOCTOU (delete then rename non-atomic) 🔲 +**File:** `src/blob/dfs/handlers/PathHandler.ts` lines 1097–1102 +**Problem:** `deleteBlob(destPath)` then `renamePathAtomic(src→dest)` are two separate operations. A concurrent create at `destPath` between them causes a constraint violation → `BlobAlreadyExists` error with the destination already gone. +**Fix:** Document as known limitation, or move the destination delete inside `renamePathAtomic`'s SQL transaction. 
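If the transactional option is chosen, a rough Sequelize sketch of folding the destination delete into `renamePathAtomic` might look like the following. The model and column names (`BlobsModel`, `accountName`, `containerName`, `blobName`) are assumptions about the SQL store schema, not verified against the code.

```ts
// Rough sketch only — model/column names are assumptions about the SQL store.
import { Model, ModelStatic, Sequelize } from "sequelize";

async function renamePathAtomic(
  sequelize: Sequelize,
  BlobsModel: ModelStatic<Model<any, any>>,
  account: string,
  container: string,
  sourcePath: string,
  destPath: string
): Promise<void> {
  await sequelize.transaction(async (t) => {
    // Overwrite semantics: remove any existing destination inside the same
    // transaction, so a concurrent create cannot slip in between the delete
    // and the rename.
    await BlobsModel.destroy({
      where: { accountName: account, containerName: container, blobName: destPath },
      transaction: t
    });
    await BlobsModel.update(
      { blobName: destPath },
      {
        where: { accountName: account, containerName: container, blobName: sourcePath },
        transaction: t
      }
    );
  });
}
```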
+ +#### [P3-m-2] `invalidFlushPosition()` in `DfsErrorFactory` has wrong status code (400) 🔲 +**File:** `src/blob/dfs/DfsErrorFactory.ts` ~line 66 +**Problem:** Returns `statusCode: 400` but flush position mismatch is correctly `409` (matching test + Azure spec). Factory is dead code (never called). +**Fix:** Delete the function, or fix to 409 and use it in `flushData`. + +#### [P3-m-3] `read` handler double `res.end()` in single-block pipe path 🔲 +**File:** `src/blob/dfs/handlers/PathHandler.ts` lines 335–340 +**Problem:** Manual `stream.on("end", () => res.end())` and `stream.pipe(res)` (auto-end) both call `res.end()`. +**Fix:** `stream.pipe(res, { end: false })` to let the manual handler own the end. + +#### [P3-m-4] `FilesystemHandler.list` passes `NaN` to `listContainers` on bad `maxResults` 🔲 +**File:** `src/blob/dfs/handlers/FilesystemHandler.ts` lines 142–144 +**Problem:** `parseInt("abc", 10)` returns `NaN`; no fallback. Contrast with `listPaths` which uses `|| 5000`. +**Fix:** `Math.max(1, Math.min(5000, parseInt(..., 10) || 5000))`. + +#### [P3-m-5] `DfsPropertyEncoding.ts` is dead code — never imported 🔲 +**File:** `src/blob/dfs/DfsPropertyEncoding.ts` +**Problem:** `encodeProperties()` and `decodeProperties()` are exported but never used. Handlers inline equivalent logic. +**Fix:** Delete the file, or import and use it in `PathHandler` and `FilesystemHandler`. + +#### [P3-m-6] `FilesystemHandler.create` uses non-standard ETag format (no `0x` prefix) 🔲 +**File:** `src/blob/dfs/handlers/FilesystemHandler.ts` line 23 +**Problem:** `` `"${now.getTime().toString(16)}"` `` produces `"187a3d2f8c0"` — no `0x` prefix, no random uniqueness factor. `setProperties` already uses `newEtag()`. +**Fix:** Replace with `newEtag()` (already imported). + +#### [P3-m-7] `renamePath` does not reject `..` segments in rename source path 🔲 +**File:** `src/blob/dfs/handlers/PathHandler.ts` lines 1053–1054 +**Problem:** `filter(p => p)` removes empty strings but not `".."`, allowing path traversal-style rename sources. +**Fix:** `if (sourceParts.some(p => p === "..")) return sendDfsError(res, invalidSourceOrDestination("Path must not contain '..' 
segments."));` + +--- + +## Pass 3 — Test Gaps + +| # | Scenario | Related issue | +|---|----------|---------------| +| 1 | `GET` on a directory path returns 400 `PathIsDirectory` | P3-M-5 | +| 2 | `setAccessControlRecursive` with invalid `mode` returns 400 | P3-M-4 | +| 3 | `setAccessControlRecursive` with continuation — root counted exactly once | P3-M-3 | +| 4 | `listPaths` non-recursive — subdirectory entries have `eTag` | P3-M-2 | +| 5 | `FilesystemHandler.list` with `maxResults=abc` does not crash | P3-m-4 | diff --git a/package-lock.json b/package-lock.json index c79dba128..55b5266a2 100644 --- a/package-lock.json +++ b/package-lock.json @@ -44,6 +44,7 @@ "@azure/core-rest-pipeline": "^1.2.0", "@azure/data-tables": "^13.0.1", "@azure/storage-blob": "^12.9.0", + "@azure/storage-file-datalake": "^12.29.0", "@azure/storage-queue": "^12.8.0", "@types/args": "^5.0.0", "@types/async": "^3.0.1", @@ -143,20 +144,77 @@ } }, "node_modules/@azure/core-client": { - "version": "1.5.0", - "resolved": "https://registry.npmjs.org/@azure/core-client/-/core-client-1.5.0.tgz", - "integrity": "sha512-YNk8i9LT6YcFdFO+RRU0E4Ef+A8Y5lhXo6lz61rwbG8Uo7kSqh0YqK04OexiilM43xd6n3Y9yBhLnb1NFNI9dA==", + "version": "1.10.1", + "resolved": "https://registry.npmjs.org/@azure/core-client/-/core-client-1.10.1.tgz", + "integrity": "sha512-Nh5PhEOeY6PrnxNPsEHRr9eimxLwgLlpmguQaHKBinFYA/RU9+kOYVOQqOrTsCL+KSxrLLl1gD8Dk5BFW/7l/w==", + "license": "MIT", "dependencies": { - "@azure/abort-controller": "^1.0.0", - "@azure/core-asynciterator-polyfill": "^1.0.0", - "@azure/core-auth": "^1.3.0", - "@azure/core-rest-pipeline": "^1.5.0", - "@azure/core-tracing": "1.0.0-preview.13", - "@azure/logger": "^1.0.0", - "tslib": "^2.2.0" + "@azure/abort-controller": "^2.1.2", + "@azure/core-auth": "^1.10.0", + "@azure/core-rest-pipeline": "^1.22.0", + "@azure/core-tracing": "^1.3.0", + "@azure/core-util": "^1.13.0", + "@azure/logger": "^1.3.0", + "tslib": "^2.6.2" }, "engines": { - "node": ">=12.0.0" + "node": ">=20.0.0" + } + }, + "node_modules/@azure/core-client/node_modules/@azure/abort-controller": { + "version": "2.1.2", + "resolved": "https://registry.npmjs.org/@azure/abort-controller/-/abort-controller-2.1.2.tgz", + "integrity": "sha512-nBrLsEWm4J2u5LpAPjxADTlq3trDgVZZXHNKabeXZtpq3d3AbN/KGO82R87rdDz5/lYB024rtEf10/q0urNgsA==", + "license": "MIT", + "dependencies": { + "tslib": "^2.6.2" + }, + "engines": { + "node": ">=18.0.0" + } + }, + "node_modules/@azure/core-client/node_modules/@azure/core-auth": { + "version": "1.10.1", + "resolved": "https://registry.npmjs.org/@azure/core-auth/-/core-auth-1.10.1.tgz", + "integrity": "sha512-ykRMW8PjVAn+RS6ww5cmK9U2CyH9p4Q88YJwvUslfuMmN98w/2rdGRLPqJYObapBCdzBVeDgYWdJnFPFb7qzpg==", + "license": "MIT", + "dependencies": { + "@azure/abort-controller": "^2.1.2", + "@azure/core-util": "^1.13.0", + "tslib": "^2.6.2" + }, + "engines": { + "node": ">=20.0.0" + } + }, + "node_modules/@azure/core-client/node_modules/@azure/core-rest-pipeline": { + "version": "1.23.0", + "resolved": "https://registry.npmjs.org/@azure/core-rest-pipeline/-/core-rest-pipeline-1.23.0.tgz", + "integrity": "sha512-Evs1INHo+jUjwHi1T6SG6Ua/LHOQBCLuKEEE6efIpt4ZOoNonaT1kP32GoOcdNDbfqsD2445CPri3MubBy5DEQ==", + "license": "MIT", + "dependencies": { + "@azure/abort-controller": "^2.1.2", + "@azure/core-auth": "^1.10.0", + "@azure/core-tracing": "^1.3.0", + "@azure/core-util": "^1.13.0", + "@azure/logger": "^1.3.0", + "@typespec/ts-http-runtime": "^0.3.4", + "tslib": "^2.6.2" + }, + "engines": { + "node": ">=20.0.0" + } + }, + 
"node_modules/@azure/core-client/node_modules/@azure/core-tracing": { + "version": "1.3.1", + "resolved": "https://registry.npmjs.org/@azure/core-tracing/-/core-tracing-1.3.1.tgz", + "integrity": "sha512-9MWKevR7Hz8kNzzPLfX4EAtGM2b8mr50HPDBvio96bURP/9C+HjdH3sBlLSNNrvRAr5/k/svoH457gB5IKpmwQ==", + "license": "MIT", + "dependencies": { + "tslib": "^2.6.2" + }, + "engines": { + "node": ">=20.0.0" } }, "node_modules/@azure/core-http": { @@ -245,11 +303,15 @@ } }, "node_modules/@azure/core-paging": { - "version": "1.1.1", - "resolved": "https://registry.npmjs.org/@azure/core-paging/-/core-paging-1.1.1.tgz", - "integrity": "sha512-hqEJBEGKan4YdOaL9ZG/GRG6PXaFd/Wb3SSjQW4LWotZzgl6xqG00h6wmkrpd2NNkbBkD1erLHBO3lPHApv+iQ==", + "version": "1.6.2", + "resolved": "https://registry.npmjs.org/@azure/core-paging/-/core-paging-1.6.2.tgz", + "integrity": "sha512-YKWi9YuCU04B55h25cnOYZHxXYtEvQEbKST5vqRga7hWY9ydd3FZHdeQF8pyh+acWZvppw13M/LMGx0LABUVMA==", + "license": "MIT", "dependencies": { - "@azure/core-asynciterator-polyfill": "^1.0.0" + "tslib": "^2.6.2" + }, + "engines": { + "node": ">=18.0.0" } }, "node_modules/@azure/core-rest-pipeline": { @@ -332,40 +394,43 @@ } }, "node_modules/@azure/core-util": { - "version": "1.11.0", - "resolved": "https://registry.npmjs.org/@azure/core-util/-/core-util-1.11.0.tgz", - "integrity": "sha512-DxOSLua+NdpWoSqULhjDyAZTXFdP/LKkqtYuxxz1SCN289zk3OG8UOpnCQAz/tygyACBtWp/BoO72ptK7msY8g==", + "version": "1.13.1", + "resolved": "https://registry.npmjs.org/@azure/core-util/-/core-util-1.13.1.tgz", + "integrity": "sha512-XPArKLzsvl0Hf0CaGyKHUyVgF7oDnhKoP85Xv6M4StF/1AhfORhZudHtOyf2s+FcbuQ9dPRAjB8J2KvRRMUK2A==", "license": "MIT", "dependencies": { - "@azure/abort-controller": "^2.0.0", + "@azure/abort-controller": "^2.1.2", + "@typespec/ts-http-runtime": "^0.3.0", "tslib": "^2.6.2" }, "engines": { - "node": ">=18.0.0" + "node": ">=20.0.0" } }, "node_modules/@azure/core-util/node_modules/@azure/abort-controller": { - "version": "2.0.0", - "resolved": "https://registry.npmjs.org/@azure/abort-controller/-/abort-controller-2.0.0.tgz", - "integrity": "sha512-RP/mR/WJchR+g+nQFJGOec+nzeN/VvjlwbinccoqfhTsTHbb8X5+mLDp48kHT0ueyum0BNSwGm0kX0UZuIqTGg==", + "version": "2.1.2", + "resolved": "https://registry.npmjs.org/@azure/abort-controller/-/abort-controller-2.1.2.tgz", + "integrity": "sha512-nBrLsEWm4J2u5LpAPjxADTlq3trDgVZZXHNKabeXZtpq3d3AbN/KGO82R87rdDz5/lYB024rtEf10/q0urNgsA==", + "license": "MIT", "dependencies": { - "tslib": "^2.2.0" + "tslib": "^2.6.2" }, "engines": { "node": ">=18.0.0" } }, "node_modules/@azure/core-xml": { - "version": "1.2.0", - "resolved": "https://registry.npmjs.org/@azure/core-xml/-/core-xml-1.2.0.tgz", - "integrity": "sha512-oWWQUWfllD3RO8Ixnsw5RjAUWPitjRI+LXSM0KFmgkSjl0R6RTQzXU2SEMsgAENkD5nzyI4yPpTRJcN2svM6ug==", + "version": "1.5.0", + "resolved": "https://registry.npmjs.org/@azure/core-xml/-/core-xml-1.5.0.tgz", + "integrity": "sha512-D/sdlJBMJfx7gqoj66PKVmhDDaU6TKA49ptcolxdas29X7AfvLTmfAGLjAcIMBK7UZ2o4lygHIqVckOlQU3xWw==", "dev": true, + "license": "MIT", "dependencies": { - "fast-xml-parser": "^4.0.1", - "tslib": "^2.2.0" + "fast-xml-parser": "^5.0.7", + "tslib": "^2.8.1" }, "engines": { - "node": ">=12.0.0" + "node": ">=20.0.0" } }, "node_modules/@azure/data-tables": { @@ -481,18 +546,18 @@ } }, "node_modules/@azure/logger": { - "version": "1.0.0", - "resolved": "https://registry.npmjs.org/@azure/logger/-/logger-1.0.0.tgz", - "integrity": "sha512-g2qLDgvmhyIxR3JVS8N67CyIOeFRKQlX/llxYJQr1OSGQqM3HTpVP8MjmjcEKbL/OIt2N9C9UFaNQuKOw1laOA==", + "version": "1.3.0", 
+ "resolved": "https://registry.npmjs.org/@azure/logger/-/logger-1.3.0.tgz", + "integrity": "sha512-fCqPIfOcLE+CGqGPd66c8bZpwAji98tZ4JI9i/mlTNTlsIWslCfpg48s/ypyLxZTump5sypjrKn2/kY7q8oAbA==", + "license": "MIT", "dependencies": { - "tslib": "^1.9.3" + "@typespec/ts-http-runtime": "^0.3.0", + "tslib": "^2.6.2" + }, + "engines": { + "node": ">=20.0.0" } }, - "node_modules/@azure/logger/node_modules/tslib": { - "version": "1.14.1", - "resolved": "https://registry.npmjs.org/tslib/-/tslib-1.14.1.tgz", - "integrity": "sha512-Xni35NKzjgMrwevysHTCArtLDpPvye8zV/0E4EyYn43P7/7qvQwPh9BGkHewbMulVntbigmcT7rdX3BNo9wRJg==" - }, "node_modules/@azure/ms-rest-js": { "version": "1.11.2", "resolved": "https://registry.npmjs.org/@azure/ms-rest-js/-/ms-rest-js-1.11.2.tgz", @@ -601,83 +666,307 @@ } }, "node_modules/@azure/storage-blob": { - "version": "12.16.0", - "resolved": "https://registry.npmjs.org/@azure/storage-blob/-/storage-blob-12.16.0.tgz", - "integrity": "sha512-jz33rUSUGUB65FgYrTRgRDjG6hdPHwfvHe+g/UrwVG8MsyLqSxg9TaW7Yuhjxu1v1OZ5xam2NU6+IpCN0xJO8Q==", + "version": "12.31.0", + "resolved": "https://registry.npmjs.org/@azure/storage-blob/-/storage-blob-12.31.0.tgz", + "integrity": "sha512-DBgNv10aCSxopt92DkTDD0o9xScXeBqPKGmR50FPZQaEcH4JLQ+GEOGEDv19V5BMkB7kxr+m4h6il/cCDPvmHg==", "dev": true, + "license": "MIT", "dependencies": { - "@azure/abort-controller": "^1.0.0", - "@azure/core-http": "^3.0.0", + "@azure/abort-controller": "^2.1.2", + "@azure/core-auth": "^1.9.0", + "@azure/core-client": "^1.9.3", + "@azure/core-http-compat": "^2.2.0", "@azure/core-lro": "^2.2.0", - "@azure/core-paging": "^1.1.1", - "@azure/core-tracing": "1.0.0-preview.13", - "@azure/logger": "^1.0.0", + "@azure/core-paging": "^1.6.2", + "@azure/core-rest-pipeline": "^1.19.1", + "@azure/core-tracing": "^1.2.0", + "@azure/core-util": "^1.11.0", + "@azure/core-xml": "^1.4.5", + "@azure/logger": "^1.1.4", + "@azure/storage-common": "^12.3.0", "events": "^3.0.0", - "tslib": "^2.2.0" + "tslib": "^2.8.1" }, "engines": { - "node": ">=14.0.0" + "node": ">=20.0.0" } }, - "node_modules/@azure/storage-blob/node_modules/@azure/core-http": { - "version": "3.0.0", - "resolved": "https://registry.npmjs.org/@azure/core-http/-/core-http-3.0.0.tgz", - "integrity": "sha512-BxI2SlGFPPz6J1XyZNIVUf0QZLBKFX+ViFjKOkzqD18J1zOINIQ8JSBKKr+i+v8+MB6LacL6Nn/sP/TE13+s2Q==", + "node_modules/@azure/storage-blob/node_modules/@azure/abort-controller": { + "version": "2.1.2", + "resolved": "https://registry.npmjs.org/@azure/abort-controller/-/abort-controller-2.1.2.tgz", + "integrity": "sha512-nBrLsEWm4J2u5LpAPjxADTlq3trDgVZZXHNKabeXZtpq3d3AbN/KGO82R87rdDz5/lYB024rtEf10/q0urNgsA==", "dev": true, + "license": "MIT", "dependencies": { - "@azure/abort-controller": "^1.0.0", - "@azure/core-auth": "^1.3.0", - "@azure/core-tracing": "1.0.0-preview.13", - "@azure/core-util": "^1.1.1", - "@azure/logger": "^1.0.0", - "@types/node-fetch": "^2.5.0", - "@types/tunnel": "^0.0.3", - "form-data": "^4.0.0", - "node-fetch": "^2.6.7", - "process": "^0.11.10", - "tslib": "^2.2.0", - "tunnel": "^0.0.6", - "uuid": "^8.3.0", - "xml2js": "^0.4.19" + "tslib": "^2.6.2" }, "engines": { - "node": ">=14.0.0" + "node": ">=18.0.0" } }, - "node_modules/@azure/storage-blob/node_modules/form-data": { - "version": "4.0.0", - "resolved": "https://registry.npmjs.org/form-data/-/form-data-4.0.0.tgz", - "integrity": "sha512-ETEklSGi5t0QMZuiXoA/Q6vcnxcLQP5vdugSpuAyi6SVGi2clPPp+xgEhuMaHC+zGgn31Kd235W35f7Hykkaww==", + "node_modules/@azure/storage-blob/node_modules/@azure/core-auth": { + "version": "1.10.1", + 
"resolved": "https://registry.npmjs.org/@azure/core-auth/-/core-auth-1.10.1.tgz", + "integrity": "sha512-ykRMW8PjVAn+RS6ww5cmK9U2CyH9p4Q88YJwvUslfuMmN98w/2rdGRLPqJYObapBCdzBVeDgYWdJnFPFb7qzpg==", "dev": true, + "license": "MIT", "dependencies": { - "asynckit": "^0.4.0", - "combined-stream": "^1.0.8", - "mime-types": "^2.1.12" + "@azure/abort-controller": "^2.1.2", + "@azure/core-util": "^1.13.0", + "tslib": "^2.6.2" }, "engines": { - "node": ">= 6" + "node": ">=20.0.0" } }, - "node_modules/@azure/storage-blob/node_modules/uuid": { - "version": "8.3.2", - "resolved": "https://registry.npmjs.org/uuid/-/uuid-8.3.2.tgz", - "integrity": "sha512-+NYs2QeMWy+GWFOEm9xnn6HCDp0l7QBD7ml8zLUmJ+93Q5NF0NocErnwkTkXVFNiX3/fpC6afS8Dhb/gz7R7eg==", + "node_modules/@azure/storage-blob/node_modules/@azure/core-http-compat": { + "version": "2.3.2", + "resolved": "https://registry.npmjs.org/@azure/core-http-compat/-/core-http-compat-2.3.2.tgz", + "integrity": "sha512-Tf6ltdKzOJEgxZeWLCjMxrxbodB/ZeCbzzA1A2qHbhzAjzjHoBVSUeSl/baT/oHAxhc4qdqVaDKnc2+iE932gw==", "dev": true, - "bin": { - "uuid": "dist/bin/uuid" + "license": "MIT", + "dependencies": { + "@azure/abort-controller": "^2.1.2" + }, + "engines": { + "node": ">=20.0.0" + }, + "peerDependencies": { + "@azure/core-client": "^1.10.0", + "@azure/core-rest-pipeline": "^1.22.0" } }, - "node_modules/@azure/storage-blob/node_modules/xml2js": { - "version": "0.4.23", - "resolved": "https://registry.npmjs.org/xml2js/-/xml2js-0.4.23.tgz", - "integrity": "sha512-ySPiMjM0+pLDftHgXY4By0uswI3SPKLDw/i3UXbnO8M/p28zqexCUoPmQFrYD+/1BzhGJSs2i1ERWKJAtiLrug==", + "node_modules/@azure/storage-blob/node_modules/@azure/core-rest-pipeline": { + "version": "1.23.0", + "resolved": "https://registry.npmjs.org/@azure/core-rest-pipeline/-/core-rest-pipeline-1.23.0.tgz", + "integrity": "sha512-Evs1INHo+jUjwHi1T6SG6Ua/LHOQBCLuKEEE6efIpt4ZOoNonaT1kP32GoOcdNDbfqsD2445CPri3MubBy5DEQ==", "dev": true, + "license": "MIT", "dependencies": { - "sax": ">=0.6.0", - "xmlbuilder": "~11.0.0" + "@azure/abort-controller": "^2.1.2", + "@azure/core-auth": "^1.10.0", + "@azure/core-tracing": "^1.3.0", + "@azure/core-util": "^1.13.0", + "@azure/logger": "^1.3.0", + "@typespec/ts-http-runtime": "^0.3.4", + "tslib": "^2.6.2" }, "engines": { - "node": ">=4.0.0" + "node": ">=20.0.0" + } + }, + "node_modules/@azure/storage-blob/node_modules/@azure/core-tracing": { + "version": "1.3.1", + "resolved": "https://registry.npmjs.org/@azure/core-tracing/-/core-tracing-1.3.1.tgz", + "integrity": "sha512-9MWKevR7Hz8kNzzPLfX4EAtGM2b8mr50HPDBvio96bURP/9C+HjdH3sBlLSNNrvRAr5/k/svoH457gB5IKpmwQ==", + "dev": true, + "license": "MIT", + "dependencies": { + "tslib": "^2.6.2" + }, + "engines": { + "node": ">=20.0.0" + } + }, + "node_modules/@azure/storage-common": { + "version": "12.3.0", + "resolved": "https://registry.npmjs.org/@azure/storage-common/-/storage-common-12.3.0.tgz", + "integrity": "sha512-/OFHhy86aG5Pe8dP5tsp+BuJ25JOAl9yaMU3WZbkeoiFMHFtJ7tu5ili7qEdBXNW9G5lDB19trwyI6V49F/8iQ==", + "dev": true, + "license": "MIT", + "dependencies": { + "@azure/abort-controller": "^2.1.2", + "@azure/core-auth": "^1.9.0", + "@azure/core-http-compat": "^2.2.0", + "@azure/core-rest-pipeline": "^1.19.1", + "@azure/core-tracing": "^1.2.0", + "@azure/core-util": "^1.11.0", + "@azure/logger": "^1.1.4", + "events": "^3.3.0", + "tslib": "^2.8.1" + }, + "engines": { + "node": ">=20.0.0" + } + }, + "node_modules/@azure/storage-common/node_modules/@azure/abort-controller": { + "version": "2.1.2", + "resolved": 
"https://registry.npmjs.org/@azure/abort-controller/-/abort-controller-2.1.2.tgz", + "integrity": "sha512-nBrLsEWm4J2u5LpAPjxADTlq3trDgVZZXHNKabeXZtpq3d3AbN/KGO82R87rdDz5/lYB024rtEf10/q0urNgsA==", + "dev": true, + "license": "MIT", + "dependencies": { + "tslib": "^2.6.2" + }, + "engines": { + "node": ">=18.0.0" + } + }, + "node_modules/@azure/storage-common/node_modules/@azure/core-auth": { + "version": "1.10.1", + "resolved": "https://registry.npmjs.org/@azure/core-auth/-/core-auth-1.10.1.tgz", + "integrity": "sha512-ykRMW8PjVAn+RS6ww5cmK9U2CyH9p4Q88YJwvUslfuMmN98w/2rdGRLPqJYObapBCdzBVeDgYWdJnFPFb7qzpg==", + "dev": true, + "license": "MIT", + "dependencies": { + "@azure/abort-controller": "^2.1.2", + "@azure/core-util": "^1.13.0", + "tslib": "^2.6.2" + }, + "engines": { + "node": ">=20.0.0" + } + }, + "node_modules/@azure/storage-common/node_modules/@azure/core-http-compat": { + "version": "2.3.2", + "resolved": "https://registry.npmjs.org/@azure/core-http-compat/-/core-http-compat-2.3.2.tgz", + "integrity": "sha512-Tf6ltdKzOJEgxZeWLCjMxrxbodB/ZeCbzzA1A2qHbhzAjzjHoBVSUeSl/baT/oHAxhc4qdqVaDKnc2+iE932gw==", + "dev": true, + "license": "MIT", + "dependencies": { + "@azure/abort-controller": "^2.1.2" + }, + "engines": { + "node": ">=20.0.0" + }, + "peerDependencies": { + "@azure/core-client": "^1.10.0", + "@azure/core-rest-pipeline": "^1.22.0" + } + }, + "node_modules/@azure/storage-common/node_modules/@azure/core-rest-pipeline": { + "version": "1.23.0", + "resolved": "https://registry.npmjs.org/@azure/core-rest-pipeline/-/core-rest-pipeline-1.23.0.tgz", + "integrity": "sha512-Evs1INHo+jUjwHi1T6SG6Ua/LHOQBCLuKEEE6efIpt4ZOoNonaT1kP32GoOcdNDbfqsD2445CPri3MubBy5DEQ==", + "dev": true, + "license": "MIT", + "dependencies": { + "@azure/abort-controller": "^2.1.2", + "@azure/core-auth": "^1.10.0", + "@azure/core-tracing": "^1.3.0", + "@azure/core-util": "^1.13.0", + "@azure/logger": "^1.3.0", + "@typespec/ts-http-runtime": "^0.3.4", + "tslib": "^2.6.2" + }, + "engines": { + "node": ">=20.0.0" + } + }, + "node_modules/@azure/storage-common/node_modules/@azure/core-tracing": { + "version": "1.3.1", + "resolved": "https://registry.npmjs.org/@azure/core-tracing/-/core-tracing-1.3.1.tgz", + "integrity": "sha512-9MWKevR7Hz8kNzzPLfX4EAtGM2b8mr50HPDBvio96bURP/9C+HjdH3sBlLSNNrvRAr5/k/svoH457gB5IKpmwQ==", + "dev": true, + "license": "MIT", + "dependencies": { + "tslib": "^2.6.2" + }, + "engines": { + "node": ">=20.0.0" + } + }, + "node_modules/@azure/storage-file-datalake": { + "version": "12.29.0", + "resolved": "https://registry.npmjs.org/@azure/storage-file-datalake/-/storage-file-datalake-12.29.0.tgz", + "integrity": "sha512-iNod3ugGFGvYJ2891UhSoICYu8iM8Q2jdub5nBzVWtMQGtr3mBRnzXK/cZeuMsF3i63yXZZmDQSvIzj7xWyObw==", + "dev": true, + "license": "MIT", + "dependencies": { + "@azure/abort-controller": "^2.1.2", + "@azure/core-auth": "^1.9.0", + "@azure/core-client": "^1.9.3", + "@azure/core-http-compat": "^2.0.0", + "@azure/core-paging": "^1.6.2", + "@azure/core-rest-pipeline": "^1.19.1", + "@azure/core-tracing": "^1.2.0", + "@azure/core-util": "^1.11.0", + "@azure/core-xml": "^1.4.3", + "@azure/logger": "^1.1.4", + "@azure/storage-blob": "^12.30.0", + "@azure/storage-common": "^12.2.0", + "events": "^3.3.0", + "tslib": "^2.8.1" + }, + "engines": { + "node": ">=20.0.0" + } + }, + "node_modules/@azure/storage-file-datalake/node_modules/@azure/abort-controller": { + "version": "2.1.2", + "resolved": "https://registry.npmjs.org/@azure/abort-controller/-/abort-controller-2.1.2.tgz", + "integrity": 
"sha512-nBrLsEWm4J2u5LpAPjxADTlq3trDgVZZXHNKabeXZtpq3d3AbN/KGO82R87rdDz5/lYB024rtEf10/q0urNgsA==", + "dev": true, + "license": "MIT", + "dependencies": { + "tslib": "^2.6.2" + }, + "engines": { + "node": ">=18.0.0" + } + }, + "node_modules/@azure/storage-file-datalake/node_modules/@azure/core-auth": { + "version": "1.10.1", + "resolved": "https://registry.npmjs.org/@azure/core-auth/-/core-auth-1.10.1.tgz", + "integrity": "sha512-ykRMW8PjVAn+RS6ww5cmK9U2CyH9p4Q88YJwvUslfuMmN98w/2rdGRLPqJYObapBCdzBVeDgYWdJnFPFb7qzpg==", + "dev": true, + "license": "MIT", + "dependencies": { + "@azure/abort-controller": "^2.1.2", + "@azure/core-util": "^1.13.0", + "tslib": "^2.6.2" + }, + "engines": { + "node": ">=20.0.0" + } + }, + "node_modules/@azure/storage-file-datalake/node_modules/@azure/core-http-compat": { + "version": "2.3.2", + "resolved": "https://registry.npmjs.org/@azure/core-http-compat/-/core-http-compat-2.3.2.tgz", + "integrity": "sha512-Tf6ltdKzOJEgxZeWLCjMxrxbodB/ZeCbzzA1A2qHbhzAjzjHoBVSUeSl/baT/oHAxhc4qdqVaDKnc2+iE932gw==", + "dev": true, + "license": "MIT", + "dependencies": { + "@azure/abort-controller": "^2.1.2" + }, + "engines": { + "node": ">=20.0.0" + }, + "peerDependencies": { + "@azure/core-client": "^1.10.0", + "@azure/core-rest-pipeline": "^1.22.0" + } + }, + "node_modules/@azure/storage-file-datalake/node_modules/@azure/core-rest-pipeline": { + "version": "1.23.0", + "resolved": "https://registry.npmjs.org/@azure/core-rest-pipeline/-/core-rest-pipeline-1.23.0.tgz", + "integrity": "sha512-Evs1INHo+jUjwHi1T6SG6Ua/LHOQBCLuKEEE6efIpt4ZOoNonaT1kP32GoOcdNDbfqsD2445CPri3MubBy5DEQ==", + "dev": true, + "license": "MIT", + "dependencies": { + "@azure/abort-controller": "^2.1.2", + "@azure/core-auth": "^1.10.0", + "@azure/core-tracing": "^1.3.0", + "@azure/core-util": "^1.13.0", + "@azure/logger": "^1.3.0", + "@typespec/ts-http-runtime": "^0.3.4", + "tslib": "^2.6.2" + }, + "engines": { + "node": ">=20.0.0" + } + }, + "node_modules/@azure/storage-file-datalake/node_modules/@azure/core-tracing": { + "version": "1.3.1", + "resolved": "https://registry.npmjs.org/@azure/core-tracing/-/core-tracing-1.3.1.tgz", + "integrity": "sha512-9MWKevR7Hz8kNzzPLfX4EAtGM2b8mr50HPDBvio96bURP/9C+HjdH3sBlLSNNrvRAr5/k/svoH457gB5IKpmwQ==", + "dev": true, + "license": "MIT", + "dependencies": { + "tslib": "^2.6.2" + }, + "engines": { + "node": ">=20.0.0" } }, "node_modules/@azure/storage-queue": { @@ -1808,6 +2097,42 @@ "url": "https://opencollective.com/typescript-eslint" } }, + "node_modules/@typespec/ts-http-runtime": { + "version": "0.3.4", + "resolved": "https://registry.npmjs.org/@typespec/ts-http-runtime/-/ts-http-runtime-0.3.4.tgz", + "integrity": "sha512-CI0NhTrz4EBaa0U+HaaUZrJhPoso8sG7ZFya8uQoBA57fjzrjRSv87ekCjLZOFExN+gXE/z0xuN2QfH4H2HrLQ==", + "license": "MIT", + "dependencies": { + "http-proxy-agent": "^7.0.0", + "https-proxy-agent": "^7.0.0", + "tslib": "^2.6.2" + }, + "engines": { + "node": ">=20.0.0" + } + }, + "node_modules/@typespec/ts-http-runtime/node_modules/agent-base": { + "version": "7.1.4", + "resolved": "https://registry.npmjs.org/agent-base/-/agent-base-7.1.4.tgz", + "integrity": "sha512-MnA+YT8fwfJPgBx3m60MNqakm30XOkyIoH1y6huTQvC0PwZG7ki8NacLBcrPbNoo8vEZy7Jpuk7+jMO+CUovTQ==", + "license": "MIT", + "engines": { + "node": ">= 14" + } + }, + "node_modules/@typespec/ts-http-runtime/node_modules/https-proxy-agent": { + "version": "7.0.6", + "resolved": "https://registry.npmjs.org/https-proxy-agent/-/https-proxy-agent-7.0.6.tgz", + "integrity": 
"sha512-vK9P5/iUfdl95AI+JVyUuIcVtd4ofvtrOr3HNtM2yxC9bnMbEdp3x01OhQNnjb8IJYi38VlTE3mBXwcfvywuSw==", + "license": "MIT", + "dependencies": { + "agent-base": "^7.1.2", + "debug": "4" + }, + "engines": { + "node": ">= 14" + } + }, "node_modules/@ungap/structured-clone": { "version": "1.2.0", "resolved": "https://registry.npmjs.org/@ungap/structured-clone/-/structured-clone-1.2.0.tgz", @@ -5099,23 +5424,38 @@ "integrity": "sha512-DCXu6Ifhqcks7TZKY3Hxp3y6qphY5SJZmrWMDrKcERSOXWQdMhU9Ig/PYrzyw/ul9jOIyh0N4M0tbC5hodg8dw==", "dev": true }, - "node_modules/fast-xml-parser": { - "version": "4.5.0", - "resolved": "https://registry.npmjs.org/fast-xml-parser/-/fast-xml-parser-4.5.0.tgz", - "integrity": "sha512-/PlTQCI96+fZMAOLMZK4CWG1ItCbfZ/0jx7UIJFChPNrx7tcEgerUgWbeieCM9MfHInUDyK8DWYZ+YrywDJuTg==", + "node_modules/fast-xml-builder": { + "version": "1.1.4", + "resolved": "https://registry.npmjs.org/fast-xml-builder/-/fast-xml-builder-1.1.4.tgz", + "integrity": "sha512-f2jhpN4Eccy0/Uz9csxh3Nu6q4ErKxf0XIsasomfOihuSUa3/xw6w8dnOtCDgEItQFJG8KyXPzQXzcODDrrbOg==", "dev": true, "funding": [ { "type": "github", "url": "https://github.com/sponsors/NaturalIntelligence" - }, + } + ], + "license": "MIT", + "dependencies": { + "path-expression-matcher": "^1.1.3" + } + }, + "node_modules/fast-xml-parser": { + "version": "5.5.6", + "resolved": "https://registry.npmjs.org/fast-xml-parser/-/fast-xml-parser-5.5.6.tgz", + "integrity": "sha512-3+fdZyBRVg29n4rXP0joHthhcHdPUHaIC16cuyyd1iLsuaO6Vea36MPrxgAzbZna8lhvZeRL8Bc9GP56/J9xEw==", + "dev": true, + "funding": [ { - "type": "paypal", - "url": "https://paypal.me/naturalintelligence" + "type": "github", + "url": "https://github.com/sponsors/NaturalIntelligence" } ], + "license": "MIT", "dependencies": { - "strnum": "^1.0.5" + "fast-xml-builder": "^1.1.4", + "path-expression-matcher": "^1.1.3", + "strnum": "^2.1.2" }, "bin": { "fxparser": "src/cli/cli.js" @@ -8069,6 +8409,22 @@ "node": ">= 0.8" } }, + "node_modules/path-expression-matcher": { + "version": "1.1.3", + "resolved": "https://registry.npmjs.org/path-expression-matcher/-/path-expression-matcher-1.1.3.tgz", + "integrity": "sha512-qdVgY8KXmVdJZRSS1JdEPOKPdTiEK/pi0RkcT2sw1RhXxohdujUlJFPuS1TSkevZ9vzd3ZlL7ULl1MHGTApKzQ==", + "dev": true, + "funding": [ + { + "type": "github", + "url": "https://github.com/sponsors/NaturalIntelligence" + } + ], + "license": "MIT", + "engines": { + "node": ">=14.0.0" + } + }, "node_modules/path-is-absolute": { "version": "1.0.1", "resolved": "https://registry.npmjs.org/path-is-absolute/-/path-is-absolute-1.0.1.tgz", @@ -9624,10 +9980,17 @@ } }, "node_modules/strnum": { - "version": "1.0.5", - "resolved": "https://registry.npmjs.org/strnum/-/strnum-1.0.5.tgz", - "integrity": "sha512-J8bbNyKKXl5qYcR36TIO8W3mVGVHrmmxsd5PAItGkmyzwJvybiw2IVq5nqd0i4LSNSkB/sx9VHllbfFdr9k1JA==", - "dev": true + "version": "2.2.0", + "resolved": "https://registry.npmjs.org/strnum/-/strnum-2.2.0.tgz", + "integrity": "sha512-Y7Bj8XyJxnPAORMZj/xltsfo55uOiyHcU2tnAVzHUnSJR/KsEX+9RoDeXEnsXtl/CX4fAcrt64gZ13aGaWPeBg==", + "dev": true, + "funding": [ + { + "type": "github", + "url": "https://github.com/sponsors/NaturalIntelligence" + } + ], + "license": "MIT" }, "node_modules/supports-color": { "version": "5.5.0", @@ -9901,9 +10264,10 @@ } }, "node_modules/tslib": { - "version": "2.6.2", - "resolved": "https://registry.npmjs.org/tslib/-/tslib-2.6.2.tgz", - "integrity": "sha512-AEYxH93jGFPn/a2iVAwW87VuUIkR1FVUKB77NwMF7nBTDkDrrT/Hpt/IrCJ0QXhW27jTBDcf5ZY7w6RiqTMw2Q==" + "version": "2.8.1", + "resolved": 
"https://registry.npmjs.org/tslib/-/tslib-2.8.1.tgz", + "integrity": "sha512-oJFu94HQb+KVduSUQL7wnpmqnfmLsOA/nAh6b6EH0wCEoK0/mPeXU6c3wKDV83MkOuHPRHtSXKKU99IBazS/2w==", + "license": "0BSD" }, "node_modules/tsutils": { "version": "3.21.0", @@ -10684,17 +11048,59 @@ } }, "@azure/core-client": { - "version": "1.5.0", - "resolved": "https://registry.npmjs.org/@azure/core-client/-/core-client-1.5.0.tgz", - "integrity": "sha512-YNk8i9LT6YcFdFO+RRU0E4Ef+A8Y5lhXo6lz61rwbG8Uo7kSqh0YqK04OexiilM43xd6n3Y9yBhLnb1NFNI9dA==", - "requires": { - "@azure/abort-controller": "^1.0.0", - "@azure/core-asynciterator-polyfill": "^1.0.0", - "@azure/core-auth": "^1.3.0", - "@azure/core-rest-pipeline": "^1.5.0", - "@azure/core-tracing": "1.0.0-preview.13", - "@azure/logger": "^1.0.0", - "tslib": "^2.2.0" + "version": "1.10.1", + "resolved": "https://registry.npmjs.org/@azure/core-client/-/core-client-1.10.1.tgz", + "integrity": "sha512-Nh5PhEOeY6PrnxNPsEHRr9eimxLwgLlpmguQaHKBinFYA/RU9+kOYVOQqOrTsCL+KSxrLLl1gD8Dk5BFW/7l/w==", + "requires": { + "@azure/abort-controller": "^2.1.2", + "@azure/core-auth": "^1.10.0", + "@azure/core-rest-pipeline": "^1.22.0", + "@azure/core-tracing": "^1.3.0", + "@azure/core-util": "^1.13.0", + "@azure/logger": "^1.3.0", + "tslib": "^2.6.2" + }, + "dependencies": { + "@azure/abort-controller": { + "version": "2.1.2", + "resolved": "https://registry.npmjs.org/@azure/abort-controller/-/abort-controller-2.1.2.tgz", + "integrity": "sha512-nBrLsEWm4J2u5LpAPjxADTlq3trDgVZZXHNKabeXZtpq3d3AbN/KGO82R87rdDz5/lYB024rtEf10/q0urNgsA==", + "requires": { + "tslib": "^2.6.2" + } + }, + "@azure/core-auth": { + "version": "1.10.1", + "resolved": "https://registry.npmjs.org/@azure/core-auth/-/core-auth-1.10.1.tgz", + "integrity": "sha512-ykRMW8PjVAn+RS6ww5cmK9U2CyH9p4Q88YJwvUslfuMmN98w/2rdGRLPqJYObapBCdzBVeDgYWdJnFPFb7qzpg==", + "requires": { + "@azure/abort-controller": "^2.1.2", + "@azure/core-util": "^1.13.0", + "tslib": "^2.6.2" + } + }, + "@azure/core-rest-pipeline": { + "version": "1.23.0", + "resolved": "https://registry.npmjs.org/@azure/core-rest-pipeline/-/core-rest-pipeline-1.23.0.tgz", + "integrity": "sha512-Evs1INHo+jUjwHi1T6SG6Ua/LHOQBCLuKEEE6efIpt4ZOoNonaT1kP32GoOcdNDbfqsD2445CPri3MubBy5DEQ==", + "requires": { + "@azure/abort-controller": "^2.1.2", + "@azure/core-auth": "^1.10.0", + "@azure/core-tracing": "^1.3.0", + "@azure/core-util": "^1.13.0", + "@azure/logger": "^1.3.0", + "@typespec/ts-http-runtime": "^0.3.4", + "tslib": "^2.6.2" + } + }, + "@azure/core-tracing": { + "version": "1.3.1", + "resolved": "https://registry.npmjs.org/@azure/core-tracing/-/core-tracing-1.3.1.tgz", + "integrity": "sha512-9MWKevR7Hz8kNzzPLfX4EAtGM2b8mr50HPDBvio96bURP/9C+HjdH3sBlLSNNrvRAr5/k/svoH457gB5IKpmwQ==", + "requires": { + "tslib": "^2.6.2" + } + } } }, "@azure/core-http": { @@ -10767,11 +11173,11 @@ } }, "@azure/core-paging": { - "version": "1.1.1", - "resolved": "https://registry.npmjs.org/@azure/core-paging/-/core-paging-1.1.1.tgz", - "integrity": "sha512-hqEJBEGKan4YdOaL9ZG/GRG6PXaFd/Wb3SSjQW4LWotZzgl6xqG00h6wmkrpd2NNkbBkD1erLHBO3lPHApv+iQ==", + "version": "1.6.2", + "resolved": "https://registry.npmjs.org/@azure/core-paging/-/core-paging-1.6.2.tgz", + "integrity": "sha512-YKWi9YuCU04B55h25cnOYZHxXYtEvQEbKST5vqRga7hWY9ydd3FZHdeQF8pyh+acWZvppw13M/LMGx0LABUVMA==", "requires": { - "@azure/core-asynciterator-polyfill": "^1.0.0" + "tslib": "^2.6.2" } }, "@azure/core-rest-pipeline": { @@ -10834,32 +11240,33 @@ } }, "@azure/core-util": { - "version": "1.11.0", - "resolved": 
"https://registry.npmjs.org/@azure/core-util/-/core-util-1.11.0.tgz", - "integrity": "sha512-DxOSLua+NdpWoSqULhjDyAZTXFdP/LKkqtYuxxz1SCN289zk3OG8UOpnCQAz/tygyACBtWp/BoO72ptK7msY8g==", + "version": "1.13.1", + "resolved": "https://registry.npmjs.org/@azure/core-util/-/core-util-1.13.1.tgz", + "integrity": "sha512-XPArKLzsvl0Hf0CaGyKHUyVgF7oDnhKoP85Xv6M4StF/1AhfORhZudHtOyf2s+FcbuQ9dPRAjB8J2KvRRMUK2A==", "requires": { - "@azure/abort-controller": "^2.0.0", + "@azure/abort-controller": "^2.1.2", + "@typespec/ts-http-runtime": "^0.3.0", "tslib": "^2.6.2" }, "dependencies": { "@azure/abort-controller": { - "version": "2.0.0", - "resolved": "https://registry.npmjs.org/@azure/abort-controller/-/abort-controller-2.0.0.tgz", - "integrity": "sha512-RP/mR/WJchR+g+nQFJGOec+nzeN/VvjlwbinccoqfhTsTHbb8X5+mLDp48kHT0ueyum0BNSwGm0kX0UZuIqTGg==", + "version": "2.1.2", + "resolved": "https://registry.npmjs.org/@azure/abort-controller/-/abort-controller-2.1.2.tgz", + "integrity": "sha512-nBrLsEWm4J2u5LpAPjxADTlq3trDgVZZXHNKabeXZtpq3d3AbN/KGO82R87rdDz5/lYB024rtEf10/q0urNgsA==", "requires": { - "tslib": "^2.2.0" + "tslib": "^2.6.2" } } } }, "@azure/core-xml": { - "version": "1.2.0", - "resolved": "https://registry.npmjs.org/@azure/core-xml/-/core-xml-1.2.0.tgz", - "integrity": "sha512-oWWQUWfllD3RO8Ixnsw5RjAUWPitjRI+LXSM0KFmgkSjl0R6RTQzXU2SEMsgAENkD5nzyI4yPpTRJcN2svM6ug==", + "version": "1.5.0", + "resolved": "https://registry.npmjs.org/@azure/core-xml/-/core-xml-1.5.0.tgz", + "integrity": "sha512-D/sdlJBMJfx7gqoj66PKVmhDDaU6TKA49ptcolxdas29X7AfvLTmfAGLjAcIMBK7UZ2o4lygHIqVckOlQU3xWw==", "dev": true, "requires": { - "fast-xml-parser": "^4.0.1", - "tslib": "^2.2.0" + "fast-xml-parser": "^5.0.7", + "tslib": "^2.8.1" } }, "@azure/data-tables": { @@ -10961,18 +11368,12 @@ } }, "@azure/logger": { - "version": "1.0.0", - "resolved": "https://registry.npmjs.org/@azure/logger/-/logger-1.0.0.tgz", - "integrity": "sha512-g2qLDgvmhyIxR3JVS8N67CyIOeFRKQlX/llxYJQr1OSGQqM3HTpVP8MjmjcEKbL/OIt2N9C9UFaNQuKOw1laOA==", + "version": "1.3.0", + "resolved": "https://registry.npmjs.org/@azure/logger/-/logger-1.3.0.tgz", + "integrity": "sha512-fCqPIfOcLE+CGqGPd66c8bZpwAji98tZ4JI9i/mlTNTlsIWslCfpg48s/ypyLxZTump5sypjrKn2/kY7q8oAbA==", "requires": { - "tslib": "^1.9.3" - }, - "dependencies": { - "tslib": { - "version": "1.14.1", - "resolved": "https://registry.npmjs.org/tslib/-/tslib-1.14.1.tgz", - "integrity": "sha512-Xni35NKzjgMrwevysHTCArtLDpPvye8zV/0E4EyYn43P7/7qvQwPh9BGkHewbMulVntbigmcT7rdX3BNo9wRJg==" - } + "@typespec/ts-http-runtime": "^0.3.0", + "tslib": "^2.6.2" } }, "@azure/ms-rest-js": { @@ -11068,68 +11469,227 @@ } }, "@azure/storage-blob": { - "version": "12.16.0", - "resolved": "https://registry.npmjs.org/@azure/storage-blob/-/storage-blob-12.16.0.tgz", - "integrity": "sha512-jz33rUSUGUB65FgYrTRgRDjG6hdPHwfvHe+g/UrwVG8MsyLqSxg9TaW7Yuhjxu1v1OZ5xam2NU6+IpCN0xJO8Q==", + "version": "12.31.0", + "resolved": "https://registry.npmjs.org/@azure/storage-blob/-/storage-blob-12.31.0.tgz", + "integrity": "sha512-DBgNv10aCSxopt92DkTDD0o9xScXeBqPKGmR50FPZQaEcH4JLQ+GEOGEDv19V5BMkB7kxr+m4h6il/cCDPvmHg==", "dev": true, "requires": { - "@azure/abort-controller": "^1.0.0", - "@azure/core-http": "^3.0.0", + "@azure/abort-controller": "^2.1.2", + "@azure/core-auth": "^1.9.0", + "@azure/core-client": "^1.9.3", + "@azure/core-http-compat": "^2.2.0", "@azure/core-lro": "^2.2.0", - "@azure/core-paging": "^1.1.1", - "@azure/core-tracing": "1.0.0-preview.13", - "@azure/logger": "^1.0.0", + "@azure/core-paging": "^1.6.2", + "@azure/core-rest-pipeline": 
"^1.19.1", + "@azure/core-tracing": "^1.2.0", + "@azure/core-util": "^1.11.0", + "@azure/core-xml": "^1.4.5", + "@azure/logger": "^1.1.4", + "@azure/storage-common": "^12.3.0", "events": "^3.0.0", - "tslib": "^2.2.0" + "tslib": "^2.8.1" }, "dependencies": { - "@azure/core-http": { - "version": "3.0.0", - "resolved": "https://registry.npmjs.org/@azure/core-http/-/core-http-3.0.0.tgz", - "integrity": "sha512-BxI2SlGFPPz6J1XyZNIVUf0QZLBKFX+ViFjKOkzqD18J1zOINIQ8JSBKKr+i+v8+MB6LacL6Nn/sP/TE13+s2Q==", + "@azure/abort-controller": { + "version": "2.1.2", + "resolved": "https://registry.npmjs.org/@azure/abort-controller/-/abort-controller-2.1.2.tgz", + "integrity": "sha512-nBrLsEWm4J2u5LpAPjxADTlq3trDgVZZXHNKabeXZtpq3d3AbN/KGO82R87rdDz5/lYB024rtEf10/q0urNgsA==", "dev": true, "requires": { - "@azure/abort-controller": "^1.0.0", - "@azure/core-auth": "^1.3.0", - "@azure/core-tracing": "1.0.0-preview.13", - "@azure/core-util": "^1.1.1", - "@azure/logger": "^1.0.0", - "@types/node-fetch": "^2.5.0", - "@types/tunnel": "^0.0.3", - "form-data": "^4.0.0", - "node-fetch": "^2.6.7", - "process": "^0.11.10", - "tslib": "^2.2.0", - "tunnel": "^0.0.6", - "uuid": "^8.3.0", - "xml2js": "^0.4.19" + "tslib": "^2.6.2" } }, - "form-data": { - "version": "4.0.0", - "resolved": "https://registry.npmjs.org/form-data/-/form-data-4.0.0.tgz", - "integrity": "sha512-ETEklSGi5t0QMZuiXoA/Q6vcnxcLQP5vdugSpuAyi6SVGi2clPPp+xgEhuMaHC+zGgn31Kd235W35f7Hykkaww==", + "@azure/core-auth": { + "version": "1.10.1", + "resolved": "https://registry.npmjs.org/@azure/core-auth/-/core-auth-1.10.1.tgz", + "integrity": "sha512-ykRMW8PjVAn+RS6ww5cmK9U2CyH9p4Q88YJwvUslfuMmN98w/2rdGRLPqJYObapBCdzBVeDgYWdJnFPFb7qzpg==", "dev": true, "requires": { - "asynckit": "^0.4.0", - "combined-stream": "^1.0.8", - "mime-types": "^2.1.12" + "@azure/abort-controller": "^2.1.2", + "@azure/core-util": "^1.13.0", + "tslib": "^2.6.2" } }, - "uuid": { - "version": "8.3.2", - "resolved": "https://registry.npmjs.org/uuid/-/uuid-8.3.2.tgz", - "integrity": "sha512-+NYs2QeMWy+GWFOEm9xnn6HCDp0l7QBD7ml8zLUmJ+93Q5NF0NocErnwkTkXVFNiX3/fpC6afS8Dhb/gz7R7eg==", - "dev": true + "@azure/core-http-compat": { + "version": "2.3.2", + "resolved": "https://registry.npmjs.org/@azure/core-http-compat/-/core-http-compat-2.3.2.tgz", + "integrity": "sha512-Tf6ltdKzOJEgxZeWLCjMxrxbodB/ZeCbzzA1A2qHbhzAjzjHoBVSUeSl/baT/oHAxhc4qdqVaDKnc2+iE932gw==", + "dev": true, + "requires": { + "@azure/abort-controller": "^2.1.2" + } }, - "xml2js": { - "version": "0.4.23", - "resolved": "https://registry.npmjs.org/xml2js/-/xml2js-0.4.23.tgz", - "integrity": "sha512-ySPiMjM0+pLDftHgXY4By0uswI3SPKLDw/i3UXbnO8M/p28zqexCUoPmQFrYD+/1BzhGJSs2i1ERWKJAtiLrug==", + "@azure/core-rest-pipeline": { + "version": "1.23.0", + "resolved": "https://registry.npmjs.org/@azure/core-rest-pipeline/-/core-rest-pipeline-1.23.0.tgz", + "integrity": "sha512-Evs1INHo+jUjwHi1T6SG6Ua/LHOQBCLuKEEE6efIpt4ZOoNonaT1kP32GoOcdNDbfqsD2445CPri3MubBy5DEQ==", "dev": true, "requires": { - "sax": ">=0.6.0", - "xmlbuilder": "~11.0.0" + "@azure/abort-controller": "^2.1.2", + "@azure/core-auth": "^1.10.0", + "@azure/core-tracing": "^1.3.0", + "@azure/core-util": "^1.13.0", + "@azure/logger": "^1.3.0", + "@typespec/ts-http-runtime": "^0.3.4", + "tslib": "^2.6.2" + } + }, + "@azure/core-tracing": { + "version": "1.3.1", + "resolved": "https://registry.npmjs.org/@azure/core-tracing/-/core-tracing-1.3.1.tgz", + "integrity": "sha512-9MWKevR7Hz8kNzzPLfX4EAtGM2b8mr50HPDBvio96bURP/9C+HjdH3sBlLSNNrvRAr5/k/svoH457gB5IKpmwQ==", + "dev": true, + "requires": { + 
"tslib": "^2.6.2" + } + } + } + }, + "@azure/storage-common": { + "version": "12.3.0", + "resolved": "https://registry.npmjs.org/@azure/storage-common/-/storage-common-12.3.0.tgz", + "integrity": "sha512-/OFHhy86aG5Pe8dP5tsp+BuJ25JOAl9yaMU3WZbkeoiFMHFtJ7tu5ili7qEdBXNW9G5lDB19trwyI6V49F/8iQ==", + "dev": true, + "requires": { + "@azure/abort-controller": "^2.1.2", + "@azure/core-auth": "^1.9.0", + "@azure/core-http-compat": "^2.2.0", + "@azure/core-rest-pipeline": "^1.19.1", + "@azure/core-tracing": "^1.2.0", + "@azure/core-util": "^1.11.0", + "@azure/logger": "^1.1.4", + "events": "^3.3.0", + "tslib": "^2.8.1" + }, + "dependencies": { + "@azure/abort-controller": { + "version": "2.1.2", + "resolved": "https://registry.npmjs.org/@azure/abort-controller/-/abort-controller-2.1.2.tgz", + "integrity": "sha512-nBrLsEWm4J2u5LpAPjxADTlq3trDgVZZXHNKabeXZtpq3d3AbN/KGO82R87rdDz5/lYB024rtEf10/q0urNgsA==", + "dev": true, + "requires": { + "tslib": "^2.6.2" + } + }, + "@azure/core-auth": { + "version": "1.10.1", + "resolved": "https://registry.npmjs.org/@azure/core-auth/-/core-auth-1.10.1.tgz", + "integrity": "sha512-ykRMW8PjVAn+RS6ww5cmK9U2CyH9p4Q88YJwvUslfuMmN98w/2rdGRLPqJYObapBCdzBVeDgYWdJnFPFb7qzpg==", + "dev": true, + "requires": { + "@azure/abort-controller": "^2.1.2", + "@azure/core-util": "^1.13.0", + "tslib": "^2.6.2" + } + }, + "@azure/core-http-compat": { + "version": "2.3.2", + "resolved": "https://registry.npmjs.org/@azure/core-http-compat/-/core-http-compat-2.3.2.tgz", + "integrity": "sha512-Tf6ltdKzOJEgxZeWLCjMxrxbodB/ZeCbzzA1A2qHbhzAjzjHoBVSUeSl/baT/oHAxhc4qdqVaDKnc2+iE932gw==", + "dev": true, + "requires": { + "@azure/abort-controller": "^2.1.2" + } + }, + "@azure/core-rest-pipeline": { + "version": "1.23.0", + "resolved": "https://registry.npmjs.org/@azure/core-rest-pipeline/-/core-rest-pipeline-1.23.0.tgz", + "integrity": "sha512-Evs1INHo+jUjwHi1T6SG6Ua/LHOQBCLuKEEE6efIpt4ZOoNonaT1kP32GoOcdNDbfqsD2445CPri3MubBy5DEQ==", + "dev": true, + "requires": { + "@azure/abort-controller": "^2.1.2", + "@azure/core-auth": "^1.10.0", + "@azure/core-tracing": "^1.3.0", + "@azure/core-util": "^1.13.0", + "@azure/logger": "^1.3.0", + "@typespec/ts-http-runtime": "^0.3.4", + "tslib": "^2.6.2" + } + }, + "@azure/core-tracing": { + "version": "1.3.1", + "resolved": "https://registry.npmjs.org/@azure/core-tracing/-/core-tracing-1.3.1.tgz", + "integrity": "sha512-9MWKevR7Hz8kNzzPLfX4EAtGM2b8mr50HPDBvio96bURP/9C+HjdH3sBlLSNNrvRAr5/k/svoH457gB5IKpmwQ==", + "dev": true, + "requires": { + "tslib": "^2.6.2" + } + } + } + }, + "@azure/storage-file-datalake": { + "version": "12.29.0", + "resolved": "https://registry.npmjs.org/@azure/storage-file-datalake/-/storage-file-datalake-12.29.0.tgz", + "integrity": "sha512-iNod3ugGFGvYJ2891UhSoICYu8iM8Q2jdub5nBzVWtMQGtr3mBRnzXK/cZeuMsF3i63yXZZmDQSvIzj7xWyObw==", + "dev": true, + "requires": { + "@azure/abort-controller": "^2.1.2", + "@azure/core-auth": "^1.9.0", + "@azure/core-client": "^1.9.3", + "@azure/core-http-compat": "^2.0.0", + "@azure/core-paging": "^1.6.2", + "@azure/core-rest-pipeline": "^1.19.1", + "@azure/core-tracing": "^1.2.0", + "@azure/core-util": "^1.11.0", + "@azure/core-xml": "^1.4.3", + "@azure/logger": "^1.1.4", + "@azure/storage-blob": "^12.30.0", + "@azure/storage-common": "^12.2.0", + "events": "^3.3.0", + "tslib": "^2.8.1" + }, + "dependencies": { + "@azure/abort-controller": { + "version": "2.1.2", + "resolved": "https://registry.npmjs.org/@azure/abort-controller/-/abort-controller-2.1.2.tgz", + "integrity": 
"sha512-nBrLsEWm4J2u5LpAPjxADTlq3trDgVZZXHNKabeXZtpq3d3AbN/KGO82R87rdDz5/lYB024rtEf10/q0urNgsA==", + "dev": true, + "requires": { + "tslib": "^2.6.2" + } + }, + "@azure/core-auth": { + "version": "1.10.1", + "resolved": "https://registry.npmjs.org/@azure/core-auth/-/core-auth-1.10.1.tgz", + "integrity": "sha512-ykRMW8PjVAn+RS6ww5cmK9U2CyH9p4Q88YJwvUslfuMmN98w/2rdGRLPqJYObapBCdzBVeDgYWdJnFPFb7qzpg==", + "dev": true, + "requires": { + "@azure/abort-controller": "^2.1.2", + "@azure/core-util": "^1.13.0", + "tslib": "^2.6.2" + } + }, + "@azure/core-http-compat": { + "version": "2.3.2", + "resolved": "https://registry.npmjs.org/@azure/core-http-compat/-/core-http-compat-2.3.2.tgz", + "integrity": "sha512-Tf6ltdKzOJEgxZeWLCjMxrxbodB/ZeCbzzA1A2qHbhzAjzjHoBVSUeSl/baT/oHAxhc4qdqVaDKnc2+iE932gw==", + "dev": true, + "requires": { + "@azure/abort-controller": "^2.1.2" + } + }, + "@azure/core-rest-pipeline": { + "version": "1.23.0", + "resolved": "https://registry.npmjs.org/@azure/core-rest-pipeline/-/core-rest-pipeline-1.23.0.tgz", + "integrity": "sha512-Evs1INHo+jUjwHi1T6SG6Ua/LHOQBCLuKEEE6efIpt4ZOoNonaT1kP32GoOcdNDbfqsD2445CPri3MubBy5DEQ==", + "dev": true, + "requires": { + "@azure/abort-controller": "^2.1.2", + "@azure/core-auth": "^1.10.0", + "@azure/core-tracing": "^1.3.0", + "@azure/core-util": "^1.13.0", + "@azure/logger": "^1.3.0", + "@typespec/ts-http-runtime": "^0.3.4", + "tslib": "^2.6.2" + } + }, + "@azure/core-tracing": { + "version": "1.3.1", + "resolved": "https://registry.npmjs.org/@azure/core-tracing/-/core-tracing-1.3.1.tgz", + "integrity": "sha512-9MWKevR7Hz8kNzzPLfX4EAtGM2b8mr50HPDBvio96bURP/9C+HjdH3sBlLSNNrvRAr5/k/svoH457gB5IKpmwQ==", + "dev": true, + "requires": { + "tslib": "^2.6.2" } } } @@ -11981,6 +12541,32 @@ "eslint-visitor-keys": "^3.3.0" } }, + "@typespec/ts-http-runtime": { + "version": "0.3.4", + "resolved": "https://registry.npmjs.org/@typespec/ts-http-runtime/-/ts-http-runtime-0.3.4.tgz", + "integrity": "sha512-CI0NhTrz4EBaa0U+HaaUZrJhPoso8sG7ZFya8uQoBA57fjzrjRSv87ekCjLZOFExN+gXE/z0xuN2QfH4H2HrLQ==", + "requires": { + "http-proxy-agent": "^7.0.0", + "https-proxy-agent": "^7.0.0", + "tslib": "^2.6.2" + }, + "dependencies": { + "agent-base": { + "version": "7.1.4", + "resolved": "https://registry.npmjs.org/agent-base/-/agent-base-7.1.4.tgz", + "integrity": "sha512-MnA+YT8fwfJPgBx3m60MNqakm30XOkyIoH1y6huTQvC0PwZG7ki8NacLBcrPbNoo8vEZy7Jpuk7+jMO+CUovTQ==" + }, + "https-proxy-agent": { + "version": "7.0.6", + "resolved": "https://registry.npmjs.org/https-proxy-agent/-/https-proxy-agent-7.0.6.tgz", + "integrity": "sha512-vK9P5/iUfdl95AI+JVyUuIcVtd4ofvtrOr3HNtM2yxC9bnMbEdp3x01OhQNnjb8IJYi38VlTE3mBXwcfvywuSw==", + "requires": { + "agent-base": "^7.1.2", + "debug": "4" + } + } + } + }, "@ungap/structured-clone": { "version": "1.2.0", "resolved": "https://registry.npmjs.org/@ungap/structured-clone/-/structured-clone-1.2.0.tgz", @@ -14630,13 +15216,24 @@ "integrity": "sha512-DCXu6Ifhqcks7TZKY3Hxp3y6qphY5SJZmrWMDrKcERSOXWQdMhU9Ig/PYrzyw/ul9jOIyh0N4M0tbC5hodg8dw==", "dev": true }, + "fast-xml-builder": { + "version": "1.1.4", + "resolved": "https://registry.npmjs.org/fast-xml-builder/-/fast-xml-builder-1.1.4.tgz", + "integrity": "sha512-f2jhpN4Eccy0/Uz9csxh3Nu6q4ErKxf0XIsasomfOihuSUa3/xw6w8dnOtCDgEItQFJG8KyXPzQXzcODDrrbOg==", + "dev": true, + "requires": { + "path-expression-matcher": "^1.1.3" + } + }, "fast-xml-parser": { - "version": "4.5.0", - "resolved": "https://registry.npmjs.org/fast-xml-parser/-/fast-xml-parser-4.5.0.tgz", - "integrity": 
"sha512-/PlTQCI96+fZMAOLMZK4CWG1ItCbfZ/0jx7UIJFChPNrx7tcEgerUgWbeieCM9MfHInUDyK8DWYZ+YrywDJuTg==", + "version": "5.5.6", + "resolved": "https://registry.npmjs.org/fast-xml-parser/-/fast-xml-parser-5.5.6.tgz", + "integrity": "sha512-3+fdZyBRVg29n4rXP0joHthhcHdPUHaIC16cuyyd1iLsuaO6Vea36MPrxgAzbZna8lhvZeRL8Bc9GP56/J9xEw==", "dev": true, "requires": { - "strnum": "^1.0.5" + "fast-xml-builder": "^1.1.4", + "path-expression-matcher": "^1.1.3", + "strnum": "^2.1.2" } }, "fastq": { @@ -16808,6 +17405,12 @@ "resolved": "https://registry.npmjs.org/parseurl/-/parseurl-1.3.3.tgz", "integrity": "sha512-CiyeOxFT/JZyN5m0z9PfXw4SCBJ6Sygz1Dpl0wqjlhDEGGBP1GnsUVEL0p63hoG1fcj3fHynXi9NYO4nWOL+qQ==" }, + "path-expression-matcher": { + "version": "1.1.3", + "resolved": "https://registry.npmjs.org/path-expression-matcher/-/path-expression-matcher-1.1.3.tgz", + "integrity": "sha512-qdVgY8KXmVdJZRSS1JdEPOKPdTiEK/pi0RkcT2sw1RhXxohdujUlJFPuS1TSkevZ9vzd3ZlL7ULl1MHGTApKzQ==", + "dev": true + }, "path-is-absolute": { "version": "1.0.1", "resolved": "https://registry.npmjs.org/path-is-absolute/-/path-is-absolute-1.0.1.tgz", @@ -17921,9 +18524,9 @@ "dev": true }, "strnum": { - "version": "1.0.5", - "resolved": "https://registry.npmjs.org/strnum/-/strnum-1.0.5.tgz", - "integrity": "sha512-J8bbNyKKXl5qYcR36TIO8W3mVGVHrmmxsd5PAItGkmyzwJvybiw2IVq5nqd0i4LSNSkB/sx9VHllbfFdr9k1JA==", + "version": "2.2.0", + "resolved": "https://registry.npmjs.org/strnum/-/strnum-2.2.0.tgz", + "integrity": "sha512-Y7Bj8XyJxnPAORMZj/xltsfo55uOiyHcU2tnAVzHUnSJR/KsEX+9RoDeXEnsXtl/CX4fAcrt64gZ13aGaWPeBg==", "dev": true }, "supports-color": { @@ -18136,9 +18739,9 @@ } }, "tslib": { - "version": "2.6.2", - "resolved": "https://registry.npmjs.org/tslib/-/tslib-2.6.2.tgz", - "integrity": "sha512-AEYxH93jGFPn/a2iVAwW87VuUIkR1FVUKB77NwMF7nBTDkDrrT/Hpt/IrCJ0QXhW27jTBDcf5ZY7w6RiqTMw2Q==" + "version": "2.8.1", + "resolved": "https://registry.npmjs.org/tslib/-/tslib-2.8.1.tgz", + "integrity": "sha512-oJFu94HQb+KVduSUQL7wnpmqnfmLsOA/nAh6b6EH0wCEoK0/mPeXU6c3wKDV83MkOuHPRHtSXKKU99IBazS/2w==" }, "tsutils": { "version": "3.21.0", diff --git a/package.json b/package.json index 6df18cfcb..b53c110aa 100644 --- a/package.json +++ b/package.json @@ -49,6 +49,7 @@ "@azure/core-rest-pipeline": "^1.2.0", "@azure/data-tables": "^13.0.1", "@azure/storage-blob": "^12.9.0", + "@azure/storage-file-datalake": "^12.29.0", "@azure/storage-queue": "^12.8.0", "@types/args": "^5.0.0", "@types/async": "^3.0.1", @@ -303,6 +304,7 @@ "build:autorest:blob": "autorest ./swagger/blob.md --typescript --use=S:/GitHub/XiaoningLiu/autorest.typescript.server", "build:autorest:queue": "autorest ./swagger/queue.md --typescript --use=S:/GitHub/XiaoningLiu/autorest.typescript.server", "build:autorest:table": "autorest ./swagger/table.md --typescript --use=S:/GitHub/XiaoningLiu/autorest.typescript.server", + "build:autorest:dfs": "autorest ./swagger/dfs.md --typescript --use=S:/GitHub/XiaoningLiu/autorest.typescript.server", "build:exe": "node ./scripts/buildExe.js", "build:linux": "node ./scripts/buildLinux.js", "watch": "tsc -watch -p ./", @@ -350,4 +352,4 @@ "url": "https://github.com/azure/azurite/issues" }, "homepage": "https://github.com/azure/azurite#readme" -} \ No newline at end of file +} diff --git a/src/azurite.ts b/src/azurite.ts index e856c3e25..40926574f 100644 --- a/src/azurite.ts +++ b/src/azurite.ts @@ -33,29 +33,16 @@ function shutdown( queueServer: QueueServer, tableServer: TableServer ) { - const blobBeforeCloseMessage = `Azurite Blob service is closing...`; - const 
blobAfterCloseMessage = `Azurite Blob service successfully closed`; - const queueBeforeCloseMessage = `Azurite Queue service is closing...`; - const queueAfterCloseMessage = `Azurite Queue service successfully closed`; - const tableBeforeCloseMessage = `Azurite Table service is closing...`; - const tableAfterCloseMessage = `Azurite Table service successfully closed`; - AzuriteTelemetryClient.TraceStopEvent(); - console.log(blobBeforeCloseMessage); - blobServer.close().then(() => { - console.log(blobAfterCloseMessage); - }); + console.log(`Azurite Blob service is closing...`); + blobServer.close().then(() => console.log(`Azurite Blob service successfully closed`)); - console.log(queueBeforeCloseMessage); - queueServer.close().then(() => { - console.log(queueAfterCloseMessage); - }); + console.log(`Azurite Queue service is closing...`); + queueServer.close().then(() => console.log(`Azurite Queue service successfully closed`)); - console.log(tableBeforeCloseMessage); - tableServer.close().then(() => { - console.log(tableAfterCloseMessage); - }); + console.log(`Azurite Table service is closing...`); + tableServer.close().then(() => console.log(`Azurite Table service successfully closed`)); } /** @@ -65,7 +52,7 @@ async function main() { // Initialize and validate environment values from command line parameters const env = new Environment(); - + const location = await env.location(); await ensureDir(location); await access(location); @@ -127,51 +114,29 @@ async function main() { env.inMemoryPersistence(), ); - // We use logger singleton as global debugger logger to track detailed outputs cross layers - // Note that, debug log is different from access log which is only available in request handler layer to - // track every request. Access log is not singleton, and initialized in specific RequestHandlerFactory implementations - // Enable debug log by default before first release for debugging purpose Logger.configLogger(blobConfig.enableDebugLog, blobConfig.debugLogFilePath); - // Create queue server instance const queueServer = new QueueServer(queueConfig); - - // Create table server instance const tableServer = new TableServer(tableConfig); setExtentMemoryLimit(env, true); - // Start server - console.log( - `Azurite Blob service is starting at ${blobConfig.getHttpServerAddress()}` - ); + console.log(`Azurite Blob service is starting at ${blobConfig.getHttpServerAddress()}`); await blobServer.start(); - console.log( - `Azurite Blob service is successfully listening at ${blobServer.getHttpServerAddress()}` - ); + console.log(`Azurite Blob service is successfully listening at ${blobServer.getHttpServerAddress()}`); + console.log(`Azurite DFS service is available on the same port as the Blob service.`); - // Start server - console.log( - `Azurite Queue service is starting at ${queueConfig.getHttpServerAddress()}` - ); + console.log(`Azurite Queue service is starting at ${queueConfig.getHttpServerAddress()}`); await queueServer.start(); - console.log( - `Azurite Queue service is successfully listening at ${queueServer.getHttpServerAddress()}` - ); + console.log(`Azurite Queue service is successfully listening at ${queueServer.getHttpServerAddress()}`); - // Start server - console.log( - `Azurite Table service is starting at ${tableConfig.getHttpServerAddress()}` - ); + console.log(`Azurite Table service is starting at ${tableConfig.getHttpServerAddress()}`); await tableServer.start(); - console.log( - `Azurite Table service is successfully listening at ${tableServer.getHttpServerAddress()}` - ); - + 
console.log(`Azurite Table service is successfully listening at ${tableServer.getHttpServerAddress()}`); + AzuriteTelemetryClient.init(location, !env.disableTelemetry(), env); await AzuriteTelemetryClient.TraceStartEvent(); - // Handle close event process .once("message", (msg) => { if (msg === "shutdown") { diff --git a/src/blob/BlobConfiguration.ts b/src/blob/BlobConfiguration.ts index b77f94a4d..c6bdb32ad 100644 --- a/src/blob/BlobConfiguration.ts +++ b/src/blob/BlobConfiguration.ts @@ -45,6 +45,7 @@ export default class BlobConfiguration extends ConfigurationBase { disableProductStyleUrl: boolean = false, public readonly isMemoryPersistence: boolean = false, public readonly memoryStore?: MemoryExtentChunkStore, + public readonly enableHierarchicalNamespace: boolean = false, ) { super( host, diff --git a/src/blob/BlobEnvironment.ts b/src/blob/BlobEnvironment.ts index 19978e592..0d28a7821 100644 --- a/src/blob/BlobEnvironment.ts +++ b/src/blob/BlobEnvironment.ts @@ -69,6 +69,10 @@ if (!(args as any).config.name) { .option( ["", "disableTelemetry"], "Optional. Disable telemetry data collection of this Azurite execution. By default, Azurite will collect telemetry data to help improve the product." + ) + .option( + ["", "enableHierarchicalNamespace"], + "Optional. Enable hierarchical namespace (HNS) mode for ADLS Gen2. Default is true." ); (args as any).config.name = "azurite-blob"; @@ -169,6 +173,14 @@ export default class BlobEnvironment implements IBlobEnvironment { return this.flags.extentMemoryLimit; } + public enableHierarchicalNamespace(): boolean { + const val = this.flags.enableHierarchicalNamespace; + if (val !== undefined) { + return val !== false && val !== "false"; + } + return true; // default enabled + } + public async debug(): Promise { if (typeof this.flags.debug === "string") { // Enable debug log to file diff --git a/src/blob/BlobRequestListenerFactory.ts b/src/blob/BlobRequestListenerFactory.ts index 498bf4d6d..294d81bb0 100644 --- a/src/blob/BlobRequestListenerFactory.ts +++ b/src/blob/BlobRequestListenerFactory.ts @@ -34,6 +34,7 @@ import { OAuthLevel } from "../common/models"; import IAuthenticator from "./authentication/IAuthenticator"; import createStorageBlobContextMiddleware from "./middlewares/blobStorageContext.middleware"; import TelemetryMiddlewareFactory from "./middlewares/telemetry.middleware"; +import DfsRequestListenerFactory from "./DfsRequestListenerFactory"; /** * Default RequestListenerFactory based on express framework. @@ -56,12 +57,54 @@ export default class BlobRequestListenerFactory private readonly loose?: boolean, private readonly skipApiVersionCheck?: boolean, private readonly oauth?: OAuthLevel, - private readonly disableProductStyleUrl?: boolean + private readonly disableProductStyleUrl?: boolean, + private readonly enableHierarchicalNamespace: boolean = false ) { } public createRequestListener(): RequestListener { const app = express().disable("x-powered-by"); + // Mount DFS pipeline before the blob middleware chain. + // DFS requests are identified by ?resource=, ?action=, or x-ms-rename-source header — + // query params that are completely disjoint from the blob API (?comp=, ?restype=). 
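+    // Illustrative examples (hypothetical URLs against the default dev account):
+    //   PUT   /devstoreaccount1/myfs?resource=filesystem          -> DFS pipeline (Filesystem_Create)
+    //   PATCH /devstoreaccount1/myfs/dir/a.txt?action=append      -> DFS pipeline (Path_Update)
+    //   GET   /devstoreaccount1/myfs?restype=container&comp=list  -> Blob pipeline (?comp= is present)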
+ const dfsRouter = new DfsRequestListenerFactory( + this.metadataStore, + this.extentStore, + this.accountDataStore, + this.oauth, + this.enableHierarchicalNamespace, + this.skipApiVersionCheck, + this.disableProductStyleUrl + ).createRouter(); + + const dfsRawBodyParser = express.raw({ type: "*/*", limit: "256mb" }); + + app.use((req: express.Request, res: express.Response, next: express.NextFunction) => { + const resource = req.query.resource; + const action = req.query.action; + const renameSource = req.headers["x-ms-rename-source"]; + const leaseAction = req.headers["x-ms-lease-action"]; + const recursive = req.query.recursive; + const userAgent = (req.headers["user-agent"] ?? "").toLowerCase(); + // NOTE: user-agent sniffing is a portability limitation of the single-port architecture. + // Real Azure uses separate hostnames (*.blob. vs *.dfs.core.windows.net) to distinguish APIs. + // Plain HEAD/DELETE to a path carry no other DFS signal, so we rely on the DataLake SDK + // user-agent string. Any client whose UA contains "datalake" is routed to the DFS pipeline. + const isDataLakeSdk = userAgent.includes("datalake"); + // Requests with ?comp= are Blob API calls (e.g. PUT ?comp=metadata); never route them to DFS. + // Blob API leases always use ?comp=lease, so leaseAction without comp is a DFS lease. + // The ?recursive param is DFS-only (used by Path_Delete and Path_ListPaths). + const comp = req.query.comp; + if (!comp && (resource || action || renameSource || leaseAction || recursive !== undefined || isDataLakeSdk)) { + dfsRawBodyParser(req, res, (err?: unknown) => { + if (err) return next(err); + dfsRouter(req, res, next); + }); + } else { + next(); + } + }); + // MiddlewareFactory is a factory to create auto-generated middleware const middlewareFactory: MiddlewareFactory = new ExpressMiddlewareFactory( logger, @@ -84,7 +127,8 @@ export default class BlobRequestListenerFactory this.extentStore, logger, loose, - pageBlobRangesManager + pageBlobRangesManager, + this.enableHierarchicalNamespace ), blockBlobHandler: new BlockBlobHandler( this.metadataStore, @@ -99,7 +143,8 @@ export default class BlobRequestListenerFactory this.extentStore, logger, loose, - this.disableProductStyleUrl + this.disableProductStyleUrl, + this.enableHierarchicalNamespace ), pageBlobHandler: new PageBlobHandler( this.metadataStore, @@ -115,7 +160,8 @@ export default class BlobRequestListenerFactory this.extentStore, logger, loose, - this.disableProductStyleUrl + this.disableProductStyleUrl, + this.enableHierarchicalNamespace ) }; diff --git a/src/blob/BlobServer.ts b/src/blob/BlobServer.ts index 2f0e10dc3..f15b5d52f 100644 --- a/src/blob/BlobServer.ts +++ b/src/blob/BlobServer.ts @@ -39,10 +39,10 @@ const AFTER_CLOSE_MESSAGE = `Azurite Blob service successfully closed`; * @class Server */ export default class BlobServer extends ServerBase implements ICleaner { - private readonly metadataStore: IBlobMetadataStore; + public readonly metadataStore: IBlobMetadataStore; private readonly extentMetadataStore: IExtentMetadataStore; - private readonly extentStore: IExtentStore; - private readonly accountDataStore: IAccountDataStore; + public readonly extentStore: IExtentStore; + public readonly accountDataStore: IAccountDataStore; private readonly gcManager: IGCManager; /** @@ -110,7 +110,8 @@ export default class BlobServer extends ServerBase implements ICleaner { configuration.loose, configuration.skipApiVersionCheck, configuration.getOAuthLevel(), - configuration.disableProductStyleUrl + 
configuration.disableProductStyleUrl, + configuration.enableHierarchicalNamespace ); super(host, port, httpServer, requestListenerFactory, configuration); diff --git a/src/blob/BlobServerFactory.ts b/src/blob/BlobServerFactory.ts index 158456476..4d106aeb6 100644 --- a/src/blob/BlobServerFactory.ts +++ b/src/blob/BlobServerFactory.ts @@ -66,7 +66,8 @@ export class BlobServerFactory { env.key(), env.pwd(), env.oauth(), - env.disableProductStyleUrl() + env.disableProductStyleUrl(), + env.enableHierarchicalNamespace() ); return new SqlBlobServer(config); @@ -90,6 +91,8 @@ export class BlobServerFactory { env.oauth(), env.disableProductStyleUrl(), env.inMemoryPersistence(), + undefined, + env.enableHierarchicalNamespace(), ); return new BlobServer(config); diff --git a/src/blob/DfsRequestListenerFactory.ts b/src/blob/DfsRequestListenerFactory.ts new file mode 100644 index 000000000..5ab5ee4e1 --- /dev/null +++ b/src/blob/DfsRequestListenerFactory.ts @@ -0,0 +1,253 @@ +import express from "express"; + +import IAccountDataStore from "../common/IAccountDataStore"; +import IRequestListenerFactory from "../common/IRequestListenerFactory"; +import logger from "../common/Logger"; +import IExtentStore from "../common/persistence/IExtentStore"; +import { OAuthLevel } from "../common/models"; +import { RequestListener } from "../common/ServerBase"; +import IBlobMetadataStore from "./persistence/IBlobMetadataStore"; +import createDfsContextMiddleware, { getDfsContext } from "./dfs/DfsContext"; +import { DfsOperation } from "./dfs/DfsOperation"; +import createDfsAuthenticationMiddleware from "./dfs/DfsAuthenticationMiddleware"; +import FilesystemHandler from "./dfs/handlers/FilesystemHandler"; +import PathHandler from "./dfs/handlers/PathHandler"; +import { sendDfsError, internalError, hierarchicalNamespaceNotEnabled, filesystemNotFound } from "./dfs/DfsErrorFactory"; +import { createStorageContext } from "./dfs/DfsContextFactory"; +import { EMULATOR_ACCOUNT_NAME } from "./utils/constants"; + +/* + * Generated DFS layer at src/blob/generated-dfs/ provides: + * - OpenAPI/Swagger spec: swagger/dfs-storage-2023-11-03.json + * - AutoRest config: swagger/dfs.md + * - Typed interfaces: generated-dfs/handlers/IFilesystemHandler, IPathHandler + * - Operation enum: generated-dfs/artifacts/operation.ts + * - Models: generated-dfs/artifacts/models.ts + * - Dispatch specs: generated-dfs/artifacts/specifications.ts + * - Handler mappers: generated-dfs/handlers/handlerMappers.ts + * + * The concrete handlers (FilesystemHandler, PathHandler) currently use Express + * req/res directly. A future refactor can adapt them to the generated + * (options, context) → response pattern with a deserializer/serializer + * middleware layer, matching the blob endpoint architecture exactly. + */ + +/** + * DfsRequestListenerFactory creates the Express application for the DFS endpoint. + * + * Architecture follows the generated interface pattern from `src/blob/generated-dfs/`: + * + * 1. Context middleware — extracts account/filesystem/path from URL + * 2. Dispatch middleware — matches request to DfsOperation + * 3. Authentication — reuses blob SharedKey/SAS/OAuth authenticators + * 4. HNS validation — rejects DFS calls on non-HNS containers + * 5. Handler middleware — routes to handler method + * 6. Error middleware — DFS JSON error responses + * + * Handler implementations (FilesystemHandler, PathHandler) fulfill the contracts + * defined by the generated IFilesystemHandler and IPathHandler interfaces. 
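+ *
+ * Illustrative walk-through (hypothetical request): PUT /devstoreaccount1/myfs/dir/a.txt?resource=file
+ * is parsed into (account=devstoreaccount1, filesystem=myfs, path=dir/a.txt), dispatched as
+ * DfsOperation.Path_Create, run through the shared blob authenticators, validated against the
+ * HNS setting of the myfs container, and finally served by PathHandler.create.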
+ */ +export default class DfsRequestListenerFactory implements IRequestListenerFactory { + public constructor( + private readonly metadataStore: IBlobMetadataStore, + private readonly extentStore: IExtentStore, + private readonly accountDataStore: IAccountDataStore, + private readonly oauth?: OAuthLevel, + private readonly enableHierarchicalNamespace: boolean = true, + private readonly skipApiVersionCheck?: boolean, + private readonly disableProductStyleUrl?: boolean + ) {} + + /** + * Returns the DFS middleware pipeline as an Express Router. + * Raw body parsing is NOT included — callers that embed this router into a + * larger app (e.g. the blob server) must apply `express.raw()` themselves + * before delegating to this router. + */ + public createRouter(): express.Router { + const router = express.Router(); + + const filesystemHandler = new FilesystemHandler(this.metadataStore, this.enableHierarchicalNamespace); + const pathHandler = new PathHandler(this.metadataStore, this.extentStore, this.oauth); + + // 1. Parse DFS context (account, filesystem, path) + router.use(createDfsContextMiddleware(this.skipApiVersionCheck, this.disableProductStyleUrl)); + + // 2. Dispatch: determine DFS operation from request + router.use((req: express.Request, res: express.Response, next: express.NextFunction) => { + const ctx = getDfsContext(res); + const resource = req.query.resource as string | undefined; + const action = req.query.action as string | undefined; + const method = req.method.toUpperCase(); + + let operation: DfsOperation | undefined; + + if (resource === "account" && method === "GET") { + operation = DfsOperation.Filesystem_List; + } else if (resource === "filesystem") { + if (ctx.path && method === "GET") { + operation = DfsOperation.Filesystem_ListPaths; + } else if (ctx.path) { + // resource=filesystem with a non-empty path and non-GET method is invalid + operation = undefined; // will fall through to 400 UnsupportedOperation + } else { + switch (method) { + case "PUT": operation = DfsOperation.Filesystem_Create; break; + case "DELETE": operation = DfsOperation.Filesystem_Delete; break; + case "HEAD": operation = DfsOperation.Filesystem_GetProperties; break; + case "PATCH": operation = DfsOperation.Filesystem_SetProperties; break; + case "GET": operation = DfsOperation.Filesystem_ListPaths; break; + } + } + } else if (ctx.filesystem && ctx.path) { + const leaseAction = req.headers["x-ms-lease-action"] as string | undefined; + if (leaseAction) { + operation = DfsOperation.Path_Lease; + } else if (req.headers["x-ms-rename-source"] && method === "PUT") { + operation = DfsOperation.Path_Rename; + } else if (resource === "file" || resource === "directory") { + operation = DfsOperation.Path_Create; + } else if (method === "HEAD") { + operation = action === "getAccessControl" + ? DfsOperation.Path_GetAccessControl + : DfsOperation.Path_GetProperties; + } else if (method === "GET") { + operation = DfsOperation.Path_Read; + } else if (method === "DELETE") { + operation = DfsOperation.Path_Delete; + } else if (action) { + // PATCH with action (append, flush, setAccessControl, etc.) 
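+            // e.g. (illustrative) PATCH ...?action=append&position=0 carrying raw bytes, followed by
+            // PATCH ...?action=flush&position=<total-length> to commit the appended data.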
+ operation = DfsOperation.Path_Update; + } else if (method === "PUT") { + operation = DfsOperation.Path_Create; + } else if (method === "PATCH") { + operation = DfsOperation.Path_Update; + } + } else if (ctx.filesystem && !ctx.path) { + switch (method) { + case "GET": operation = DfsOperation.Filesystem_ListPaths; break; + case "PUT": operation = DfsOperation.Filesystem_Create; break; + case "DELETE": operation = DfsOperation.Filesystem_Delete; break; + case "HEAD": operation = DfsOperation.Filesystem_GetProperties; break; + } + } + + if (operation) { + ctx.operation = operation; + } + + next(); + }); + + // 3. Authentication middleware + router.use(createDfsAuthenticationMiddleware( + this.accountDataStore, + this.metadataStore, + logger, + this.oauth + )); + + // 4. HNS validation: reject DFS operations on non-HNS containers. + // Filesystem_Create is exempt (container doesn't exist yet). + // Filesystem_List is exempt (not scoped to a single container). + router.use(async (req: express.Request, res: express.Response, next: express.NextFunction) => { + const ctx = getDfsContext(res); + const operation = ctx.operation; + + if ( + !ctx.filesystem || + operation === DfsOperation.Filesystem_Create || + operation === DfsOperation.Filesystem_List + ) { + return next(); + } + + const filesystem = ctx.filesystem; + try { + const account = ctx.account || EMULATOR_ACCOUNT_NAME; + const container = await this.metadataStore.getContainerProperties( + createStorageContext(ctx.requestId), + account, + filesystem + ); + if (!container) { + return sendDfsError(res, filesystemNotFound(filesystem)); + } + // When HNS is enabled server-wide, all containers are HNS-enabled. + // Only enforce the per-container opt-in check when the server flag is off. + if (!this.enableHierarchicalNamespace && + container.metadata?.["azurite_hns_enabled"] !== "true") { + return sendDfsError(res, hierarchicalNamespaceNotEnabled(filesystem)); + } + } catch (err: any) { + if (err.statusCode === 404) { + return sendDfsError(res, filesystemNotFound(filesystem)); + } + return sendDfsError(res, internalError(err.message)); + } + + next(); + }); + + // 5. 
Route to handler + router.use(async (req: express.Request, res: express.Response, next: express.NextFunction) => { + try { + const ctx = getDfsContext(res); + const operation = ctx.operation; + + switch (operation) { + case DfsOperation.Filesystem_Create: + return await filesystemHandler.create(req, res); + case DfsOperation.Filesystem_Delete: + return await filesystemHandler.delete(req, res); + case DfsOperation.Filesystem_GetProperties: + return await filesystemHandler.getProperties(req, res); + case DfsOperation.Filesystem_List: + return await filesystemHandler.list(req, res); + case DfsOperation.Filesystem_SetProperties: + return await filesystemHandler.setProperties(req, res); + case DfsOperation.Filesystem_ListPaths: + return await pathHandler.listPaths(req, res); + case DfsOperation.Path_Create: + case DfsOperation.Path_Rename: + return await pathHandler.create(req, res); + case DfsOperation.Path_Delete: + return await pathHandler.delete(req, res); + case DfsOperation.Path_GetProperties: + case DfsOperation.Path_GetAccessControl: + return await pathHandler.getProperties(req, res); + case DfsOperation.Path_Read: + return await pathHandler.read(req, res); + case DfsOperation.Path_Update: + return await pathHandler.update(req, res); + case DfsOperation.Path_Lease: + return await pathHandler.lease(req, res); + default: + res.status(400).json({ + error: { + code: "UnsupportedOperation", + message: `The requested operation is not supported.` + } + }); + } + } catch (error: any) { + next(error); + } + }); + + // 6. Error handler + router.use((error: Error, _req: express.Request, res: express.Response, _next: express.NextFunction) => { + sendDfsError(res, internalError(error.message)); + }); + + return router; + } + + public createRequestListener(): RequestListener { + const app = express().disable("x-powered-by"); + // Raw body parsing needed for append/update operations. 
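+    // e.g. (illustrative) DataLakeFileClient.append(content, 0, content.length) arrives as a PATCH
+    // whose body is the raw bytes to stage; without express.raw() that payload would never reach
+    // PathHandler.update.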
+ app.use(express.raw({ type: "*/*", limit: "256mb" })); + app.use(this.createRouter()); + return app; + } +} diff --git a/src/blob/IBlobEnvironment.ts b/src/blob/IBlobEnvironment.ts index a57700759..4c7e8a81c 100644 --- a/src/blob/IBlobEnvironment.ts +++ b/src/blob/IBlobEnvironment.ts @@ -15,4 +15,5 @@ export default interface IBlobEnvironment { inMemoryPersistence(): boolean; extentMemoryLimit(): number | undefined; disableTelemetry(): boolean; + enableHierarchicalNamespace(): boolean; } diff --git a/src/blob/SqlBlobConfiguration.ts b/src/blob/SqlBlobConfiguration.ts index a4e785812..97385390c 100644 --- a/src/blob/SqlBlobConfiguration.ts +++ b/src/blob/SqlBlobConfiguration.ts @@ -37,7 +37,8 @@ export default class SqlBlobConfiguration extends ConfigurationBase { key: string = "", pwd: string = "", oauth?: string, - disableProductStyleUrl: boolean = false + disableProductStyleUrl: boolean = false, + public readonly enableHierarchicalNamespace: boolean = false ) { super( host, diff --git a/src/blob/SqlBlobServer.ts b/src/blob/SqlBlobServer.ts index c0e07e6d3..ca40e1b2f 100644 --- a/src/blob/SqlBlobServer.ts +++ b/src/blob/SqlBlobServer.ts @@ -36,10 +36,10 @@ const AFTER_CLOSE_MESSAGE = `Azurite Blob service successfully closed`; * @class Server */ export default class SqlBlobServer extends ServerBase { - private readonly metadataStore: IBlobMetadataStore; + public readonly metadataStore: IBlobMetadataStore; private readonly extentMetadataStore: IExtentMetadataStore; - private readonly extentStore: IExtentStore; - private readonly accountDataStore: IAccountDataStore; + public readonly extentStore: IExtentStore; + public readonly accountDataStore: IAccountDataStore; private readonly gcManager: IGCManager; /** @@ -96,7 +96,8 @@ export default class SqlBlobServer extends ServerBase { configuration.loose, configuration.skipApiVersionCheck, configuration.getOAuthLevel(), - configuration.disableProductStyleUrl + configuration.disableProductStyleUrl, + configuration.enableHierarchicalNamespace ); super(host, port, httpServer, requestListenerFactory, configuration); diff --git a/src/blob/authentication/BlobTokenAuthenticator.ts b/src/blob/authentication/BlobTokenAuthenticator.ts index a9959a85a..f6a308e10 100644 --- a/src/blob/authentication/BlobTokenAuthenticator.ts +++ b/src/blob/authentication/BlobTokenAuthenticator.ts @@ -83,6 +83,7 @@ export default class BlobTokenAuthenticator implements IAuthenticator { switch (this.oauth) { case OAuthLevel.BASIC: + case OAuthLevel.ACL: return this.authenticateBasic(token, context); default: this.logger.warn( diff --git a/src/blob/context/BlobStorageContext.ts b/src/blob/context/BlobStorageContext.ts index 02c148175..4a4215e34 100644 --- a/src/blob/context/BlobStorageContext.ts +++ b/src/blob/context/BlobStorageContext.ts @@ -59,7 +59,7 @@ export default class BlobStorageContext extends Context return this.context.disableProductStyleUrl; } - public set disableProductStyleUrl(disableProductStyleUrl: boolean| undefined) { + public set disableProductStyleUrl(disableProductStyleUrl: boolean | undefined) { this.context.disableProductStyleUrl = disableProductStyleUrl; } @@ -67,7 +67,7 @@ export default class BlobStorageContext extends Context return this.context.loose; } - public set loose(loose: boolean| undefined) { + public set loose(loose: boolean | undefined) { this.context.loose = loose; } } diff --git a/src/blob/dfs/DfsAclEnforcer.ts b/src/blob/dfs/DfsAclEnforcer.ts new file mode 100644 index 000000000..182b3b3ec --- /dev/null +++ 
b/src/blob/dfs/DfsAclEnforcer.ts @@ -0,0 +1,217 @@ +/** + * ACL Enforcement for DFS (ADLS Gen2) operations. + * + * Phase III: When --oauth acl is enabled, checks the caller's identity + * against POSIX ACL entries stored on each path before allowing operations. + * + * ACL format follows Azure ADLS Gen2: + * "user::rwx,user:oid:r-x,group::r-x,other::---" + * + * Limitations (per wiki guidance): + * - No AAD group membership resolution + * - $superuser identity bypasses all ACL checks + * - Emulator mode (no identity) bypasses all checks + */ + +import { IDfsAuthenticatedIdentity } from "./DfsContext"; + +/** Required permission for an operation */ +export type AclPermission = "r" | "w" | "x"; + +/** + * Maps DFS operations to the minimum required permission. + */ +export function getRequiredPermission( + operationDescription: string +): AclPermission { + switch (operationDescription) { + case "read": + case "getProperties": + case "getAccessControl": + case "listPaths": + return "r"; + case "create": + case "delete": + case "update": + case "setAccessControl": + case "setProperties": + case "rename": + case "lease": + return "w"; + case "listChildren": + return "x"; + default: + return "r"; + } +} + +/** + * Parsed ACL entry. + * Format: "type:entityId:permissions" + * Examples: "user::rwx", "user:abc-123:r-x", "group::r--", "other::---" + */ +export interface AclEntry { + type: "user" | "group" | "mask" | "other"; + entityId: string; // empty string for default user/group/other + read: boolean; + write: boolean; + execute: boolean; +} + +/** + * Parse an ACL string into structured entries. + * ACL format: "user::rwx,user:abc:r-x,group::r-x,mask::rwx,other::---" + */ +export function parseAcl(aclString: string | undefined): AclEntry[] { + if (!aclString) return []; + + return aclString.split(",").filter(Boolean).map(entry => { + const parts = entry.split(":"); + if (parts.length < 3) return null; + + const type = parts[0] as AclEntry["type"]; + const entityId = parts[1]; + const perms = parts[2]; + + return { + type, + entityId, + read: perms.charAt(0) === "r", + write: perms.charAt(1) === "w", + execute: perms.charAt(2) === "x" + }; + }).filter((e): e is AclEntry => e !== null); +} + +/** + * Check if an ACL entry grants the required permission. + */ +function entryHasPermission(entry: AclEntry, permission: AclPermission): boolean { + switch (permission) { + case "r": return entry.read; + case "w": return entry.write; + case "x": return entry.execute; + } +} + +/** + * Result of an ACL check. + */ +export interface AclCheckResult { + allowed: boolean; + reason: string; +} + +/** + * Check whether the given identity is authorized for the required permission + * based on the path's ACL metadata. + * + * Algorithm follows the POSIX ACL evaluation order: + * 1. If owner matches identity → use owner permissions + * 2. If a named user entry matches identity → use that entry (masked) + * 3. If group matches → use group permissions (masked) + * 4. 
Fall through to other permissions + * + * Special cases: + * - $superuser always passes (emulator admin) + * - No identity (unauthenticated) always passes (emulator dev mode) + * - No ACL metadata → use default permissions (rwxr-x---) + */ +export function checkAcl( + identity: IDfsAuthenticatedIdentity | undefined, + owner: string | undefined, + group: string | undefined, + permissionsStr: string | undefined, + aclStr: string | undefined, + requiredPermission: AclPermission +): AclCheckResult { + // No identity = emulator/dev mode → bypass + if (!identity || (!identity.oid && !identity.upn)) { + return { allowed: true, reason: "No authenticated identity — emulator mode bypass" }; + } + + const callerId = identity.oid || identity.upn || ""; + const effectiveOwner = owner || "$superuser"; + + // $superuser caller bypasses all ACL checks + if (callerId === "$superuser") { + return { allowed: true, reason: "$superuser caller bypasses ACL checks" }; + } + + // Check if caller is the owner + if (callerId === effectiveOwner) { + // Use owner permissions from the permissions string (chars 0-2) + const perms = permissionsStr || "rwxr-x---"; + const ownerPerms: AclEntry = { + type: "user", + entityId: "", + read: perms.charAt(0) === "r", + write: perms.charAt(1) === "w", + execute: perms.charAt(2) === "x" + }; + if (entryHasPermission(ownerPerms, requiredPermission)) { + return { allowed: true, reason: "Owner permission granted" }; + } + return { allowed: false, reason: "Owner does not have required permission" }; + } + + // Parse ACL entries for named user/group matching + const aclEntries = parseAcl(aclStr); + + // Find mask entry (used to limit named user and group permissions) + const maskEntry = aclEntries.find(e => e.type === "mask" && e.entityId === ""); + + // Check named user entries + const namedUser = aclEntries.find( + e => e.type === "user" && e.entityId !== "" && e.entityId === callerId + ); + if (namedUser) { + const effective = maskEntry + ? entryHasPermission(namedUser, requiredPermission) && entryHasPermission(maskEntry, requiredPermission) + : entryHasPermission(namedUser, requiredPermission); + if (effective) { + return { allowed: true, reason: `Named user ACL entry matched (${callerId})` }; + } + return { allowed: false, reason: `Named user ACL entry matched but lacks permission` }; + } + + // Named group ACL entries (group::rwx) are intentionally skipped here. + // Resolving AAD group membership would require a live token/graph call, which is + // out of scope for an emulator. Callers relying on named group ACLs will fall + // through to "other" permissions, which is a known and documented limitation. + + // Check the owning group (only if the caller's OID/UPN matches the group identifier) + const effectiveGroup = group || "$superuser"; + if (callerId === effectiveGroup) { + const perms = permissionsStr || "rwxr-x---"; + const groupPerms: AclEntry = { + type: "group", + entityId: "", + read: perms.charAt(3) === "r", + write: perms.charAt(4) === "w", + execute: perms.charAt(5) === "x" + }; + const effective = maskEntry + ? 
entryHasPermission(groupPerms, requiredPermission) && entryHasPermission(maskEntry, requiredPermission) + : entryHasPermission(groupPerms, requiredPermission); + if (effective) { + return { allowed: true, reason: "Group permission granted" }; + } + return { allowed: false, reason: "Group does not have required permission" }; + } + + // Fall through to "other" permissions (chars 6-8) + const perms = permissionsStr || "rwxr-x---"; + const otherPerms: AclEntry = { + type: "other", + entityId: "", + read: perms.charAt(6) === "r", + write: perms.charAt(7) === "w", + execute: perms.charAt(8) === "x" + }; + if (entryHasPermission(otherPerms, requiredPermission)) { + return { allowed: true, reason: "Other permission granted" }; + } + + return { allowed: false, reason: "Insufficient ACL permissions" }; +} diff --git a/src/blob/dfs/DfsAuthenticationMiddleware.ts b/src/blob/dfs/DfsAuthenticationMiddleware.ts new file mode 100644 index 000000000..33c1dad19 --- /dev/null +++ b/src/blob/dfs/DfsAuthenticationMiddleware.ts @@ -0,0 +1,167 @@ +import { decode } from "jsonwebtoken"; +import { NextFunction, Request, RequestHandler, Response } from "express"; + +import IAccountDataStore from "../../common/IAccountDataStore"; +import ILogger from "../../common/ILogger"; +import IAuthenticator from "../authentication/IAuthenticator"; +import AccountSASAuthenticator from "../authentication/AccountSASAuthenticator"; +import BlobSASAuthenticator from "../authentication/BlobSASAuthenticator"; +import BlobSharedKeyAuthenticator from "../authentication/BlobSharedKeyAuthenticator"; +import BlobTokenAuthenticator from "../authentication/BlobTokenAuthenticator"; +import BlobStorageContext from "../context/BlobStorageContext"; +import ExpressRequestAdapter from "../generated/ExpressRequestAdapter"; + +import Operation from "../generated/artifacts/operation"; +import IBlobMetadataStore from "../persistence/IBlobMetadataStore"; +import { getDfsContext, IDfsAuthenticatedIdentity } from "./DfsContext"; +import { DfsOperation } from "./DfsOperation"; +import { sendDfsError } from "./DfsErrorFactory"; +import { OAuthLevel } from "../../common/models"; +import { BEARER_TOKEN_PREFIX } from "../../common/utils/constants"; + +const DEFAULT_CONTEXT_PATH = "dfs_blob_context"; + +/** + * Maps DFS operations to blob operations for SAS permission checking. 
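+ * For example, a DFS Path_Read is authorized as if it were Blob_Download, and a
+ * Filesystem_Create as if it were Container_Create, so a SAS token or key that
+ * already grants the corresponding blob permission also authorizes the DFS call.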
+ */ +function mapDfsOperationToBlobOperation(op?: DfsOperation): Operation { + switch (op) { + case DfsOperation.Filesystem_Create: + return Operation.Container_Create; + case DfsOperation.Filesystem_Delete: + return Operation.Container_Delete; + case DfsOperation.Filesystem_GetProperties: + return Operation.Container_GetProperties; + case DfsOperation.Filesystem_SetProperties: + return Operation.Container_SetMetadata; + case DfsOperation.Filesystem_List: + return Operation.Service_ListContainersSegment; + case DfsOperation.Filesystem_ListPaths: + return Operation.Container_ListBlobHierarchySegment; + case DfsOperation.Path_Create: + case DfsOperation.Path_Rename: + return Operation.BlockBlob_Upload; + case DfsOperation.Path_Delete: + return Operation.Blob_Delete; + case DfsOperation.Path_GetProperties: + case DfsOperation.Path_GetAccessControl: + return Operation.Blob_GetProperties; + case DfsOperation.Path_Read: + return Operation.Blob_Download; + case DfsOperation.Path_Update: + return Operation.BlockBlob_StageBlock; + case DfsOperation.Path_Lease: + return Operation.Blob_AcquireLease; + default: + return Operation.Blob_GetProperties; + } +} + +/** + * Extracts identity claims from a Bearer JWT token. + * Returns undefined if the token is not a Bearer token or can't be decoded. + */ +function extractIdentityFromRequest(req: Request): IDfsAuthenticatedIdentity | undefined { + const authHeader = req.header("authorization"); + if (!authHeader || !authHeader.startsWith(BEARER_TOKEN_PREFIX)) { + return undefined; + } + + const token = authHeader.substring(BEARER_TOKEN_PREFIX.length + 1); + try { + const decoded = decode(token) as { [key: string]: any } | null; + if (!decoded) return undefined; + + return { + oid: decoded.oid as string | undefined, + upn: decoded.upn as string | undefined, + tid: decoded.tid as string | undefined, + appid: decoded.appid as string | undefined + }; + } catch { + return undefined; + } +} + +export default function createDfsAuthenticationMiddleware( + accountDataStore: IAccountDataStore, + metadataStore: IBlobMetadataStore, + logger: ILogger, + oauth?: OAuthLevel +): RequestHandler { + const authenticators: IAuthenticator[] = [ + new BlobSharedKeyAuthenticator(accountDataStore, logger), + new AccountSASAuthenticator(accountDataStore, metadataStore, logger), + new BlobSASAuthenticator(accountDataStore, metadataStore, logger) + ]; + if (oauth !== undefined) { + authenticators.push( + new BlobTokenAuthenticator(accountDataStore, oauth, logger) + ); + } + + return async (req: Request, res: Response, next: NextFunction) => { + const dfsCtx = getDfsContext(res); + + // Build a BlobStorageContext that the existing authenticators can use + const holder: any = {}; + const blobContext = new BlobStorageContext(holder, DEFAULT_CONTEXT_PATH); + blobContext.startTime = dfsCtx.startTime; + blobContext.contextId = dfsCtx.requestId; + blobContext.account = dfsCtx.account; + blobContext.container = dfsCtx.filesystem; + blobContext.blob = dfsCtx.path; + blobContext.authenticationPath = dfsCtx.authenticationPath; + blobContext.loose = true; // DFS operations use loose mode for SAS validation + + // Set the blob operation for SAS permission checking + blobContext.operation = mapDfsOperationToBlobOperation(dfsCtx.operation); + + const request = new ExpressRequestAdapter(req); + + try { + let pass = false; + for (const authenticator of authenticators) { + const result = await authenticator.validate(request, blobContext); + if (result === true) { + pass = true; + break; + } + } + + if 
(!pass) { + const hasAuth = req.header("authorization") !== undefined; + const hasSas = req.query.sig !== undefined; + if (!hasAuth && !hasSas && oauth === undefined) { + // No credentials and no OAuth requirement — pass through (emulator dev mode) + return next(); + } + + sendDfsError(res, { + statusCode: 403, + code: "AuthorizationFailure", + message: "Server failed to authenticate the request." + }); + return; + } + + // When ACL mode is enabled, extract identity from bearer token + // so ACL enforcement can check permissions downstream + if (oauth === OAuthLevel.ACL) { + dfsCtx.identity = extractIdentityFromRequest(req); + } + + next(); + } catch (error: any) { + if (error.statusCode) { + sendDfsError(res, { + statusCode: error.statusCode, + code: error.storageErrorCode || "AuthenticationFailed", + message: error.storageErrorMessage || error.message + }); + } else { + next(error); + } + } + }; +} diff --git a/src/blob/dfs/DfsContext.ts b/src/blob/dfs/DfsContext.ts new file mode 100644 index 000000000..bb77d9e8b --- /dev/null +++ b/src/blob/dfs/DfsContext.ts @@ -0,0 +1,165 @@ +import uuid from "uuid/v4"; +import { NextFunction, Request, RequestHandler, Response } from "express"; + +import logger from "../../common/Logger"; +import { IP_REGEX, NO_ACCOUNT_HOST_NAMES } from "../../common/utils/constants"; +import { SECONDARY_SUFFIX, HeaderConstants, ValidAPIVersions, VERSION, EMULATOR_ACCOUNT_NAME } from "../utils/constants"; +import { checkApiVersion } from "../utils/utils"; +import { DfsOperation } from "./DfsOperation"; +import { sendDfsError } from "./DfsErrorFactory"; + +/** + * Identity extracted from an OAuth bearer token. + * Used for ACL enforcement in Phase III (--oauth acl). + */ +export interface IDfsAuthenticatedIdentity { + /** Azure AD object ID (oid claim) */ + oid?: string; + /** User principal name (upn claim) */ + upn?: string; + /** Tenant ID (tid claim) */ + tid?: string; + /** Application ID (appid claim) */ + appid?: string; +} + +export interface IDfsContext { + requestId: string; + startTime: Date; + account?: string; + filesystem?: string; + path?: string; + isSecondary?: boolean; + operation?: DfsOperation; + authenticationPath?: string; + /** Authenticated identity from OAuth token — populated when --oauth acl is enabled */ + identity?: IDfsAuthenticatedIdentity; +} + +const DFS_CONTEXT_KEY = "dfsContext"; + +export function getDfsContext(res: Response): IDfsContext { + return res.locals[DFS_CONTEXT_KEY]; +} + +export default function createDfsContextMiddleware( + skipApiVersionCheck?: boolean, + disableProductStyleUrl?: boolean +): RequestHandler { + return (req: Request, res: Response, next: NextFunction) => { + res.setHeader(HeaderConstants.SERVER, `Azurite-DFS/${VERSION}`); + const requestId = uuid(); + + if (!skipApiVersionCheck) { + const apiVersion = req.header(HeaderConstants.X_MS_VERSION); + if (apiVersion !== undefined) { + try { + checkApiVersion(apiVersion, ValidAPIVersions, requestId); + } catch (error) { + next(error); + return; + } + } + } + + const context: IDfsContext = { + requestId, + startTime: new Date() + }; + + const [account, filesystem, path, isSecondary] = extractDfsPartsFromPath( + req.hostname, + req.path, + disableProductStyleUrl + ); + + context.account = account; + context.filesystem = filesystem; + context.path = path; + context.isSecondary = isSecondary; + context.authenticationPath = req.path; + + if (isSecondary && context.authenticationPath) { + const pos = context.authenticationPath.search(SECONDARY_SUFFIX); + if (pos !== -1) 
{ + context.authenticationPath = + context.authenticationPath.substr(0, pos) + + context.authenticationPath.substr(pos + SECONDARY_SUFFIX.length); + } + } + + res.locals[DFS_CONTEXT_KEY] = context; + + logger.info( + `DfsContextMiddleware: RequestMethod=${req.method} RequestURL=${req.protocol}://${req.hostname}${req.url} ClientIP=${req.ip}`, + requestId + ); + logger.info( + `DfsContextMiddleware: Account=${account} Filesystem=${filesystem} Path=${path}`, + requestId + ); + + if (!account) { + sendDfsError(res, { statusCode: 400, code: "InvalidQueryParameterValue", message: "Account name is required." }); + return; + } + + next(); + }; +} + +function extractDfsPartsFromPath( + hostname: string, + path: string, + disableProductStyleUrl?: boolean +): [string | undefined, string | undefined, string | undefined, boolean] { + let account: string | undefined; + let filesystem: string | undefined; + let blobPath: string | undefined; + let isSecondary = false; + + const decodedPath = decodeURIComponent(path); + const normalizedPath = decodedPath.startsWith("/") + ? decodedPath.substr(1) + : decodedPath; + + const parts = normalizedPath.split("/"); + let urlPartIndex = 0; + + const isIPAddress = IP_REGEX.test(hostname); + const isNoAccountHostName = NO_ACCOUNT_HOST_NAMES.has(hostname.toLowerCase()); + const firstDotIndex = hostname.indexOf("."); + + if (!disableProductStyleUrl && !isIPAddress && !isNoAccountHostName && firstDotIndex > 0) { + account = hostname.substring(0, firstDotIndex); + } else { + account = parts[urlPartIndex++]; + // The DataLake SDK constructs destination URLs for move() as //, + // omitting the account name when the base URL is IP-based. Detect this by checking + // whether the first segment is the emulator account; if not, fall back to it. + if ((isIPAddress || isNoAccountHostName) && account && account !== EMULATOR_ACCOUNT_NAME) { + // Treat the first segment as the filesystem, not the account + filesystem = account; + account = EMULATOR_ACCOUNT_NAME; + blobPath = parts.slice(urlPartIndex).join("/").replace(/\\/g, "/"); + return [account, filesystem, blobPath || undefined, isSecondary]; + } + } + + filesystem = parts[urlPartIndex++]; + blobPath = parts + .slice(urlPartIndex) + .join("/") + .replace(/\\/g, "/"); + + if (account && account.endsWith(SECONDARY_SUFFIX)) { + account = account.substr(0, account.length - SECONDARY_SUFFIX.length); + isSecondary = true; + } + + // Empty strings become undefined + if (!filesystem) filesystem = undefined; + if (!blobPath) blobPath = undefined; + + return [account, filesystem, blobPath, isSecondary]; +} diff --git a/src/blob/dfs/DfsContextFactory.ts b/src/blob/dfs/DfsContextFactory.ts new file mode 100644 index 000000000..ac477e193 --- /dev/null +++ b/src/blob/dfs/DfsContextFactory.ts @@ -0,0 +1,14 @@ +import Context from "../generated/Context"; + +/** + * Creates a minimal Context object suitable for passing to IBlobMetadataStore methods. + * DFS handlers don't go through the generated middleware pipeline, so we create + * context objects manually with the required fields. 
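+ *
+ * Illustrative usage (mirrors how FilesystemHandler and PathHandler call into the store):
+ *   const ctx = createStorageContext(dfsCtx.requestId);
+ *   const props = await metadataStore.getContainerProperties(ctx, account, filesystem);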
+ */ +export function createStorageContext(requestId?: string): Context { + const holder: any = {}; + const ctx = new Context(holder, "ctx"); + ctx.startTime = new Date(); + ctx.contextId = requestId; + return ctx; +} diff --git a/src/blob/dfs/DfsErrorFactory.ts b/src/blob/dfs/DfsErrorFactory.ts new file mode 100644 index 000000000..0a25d0dc0 --- /dev/null +++ b/src/blob/dfs/DfsErrorFactory.ts @@ -0,0 +1,128 @@ +import { Response } from "express"; + +export interface DfsError { + statusCode: number; + code: string; + message: string; +} + +export function sendDfsError(res: Response, error: DfsError): void { + res.status(error.statusCode); + res.setHeader("x-ms-error-code", error.code); + + // HEAD requests must not include a response body. Sending Content-Type: application/json + // with an empty body causes Azure SDKs to crash trying to parse JSON from nothing. + // Use res.req (set by express) to detect HEAD without requiring callers to pass req. + if (res.req && res.req.method === "HEAD") { + res.setHeader("Content-Length", "0"); + res.end(); + } else { + res.json({ + error: { code: error.code, message: error.message } + }); + } +} + +export function filesystemNotFound(filesystem: string): DfsError { + return { + statusCode: 404, + code: "FilesystemNotFound", + message: `The specified filesystem does not exist. Filesystem: ${filesystem}` + }; +} + +export function pathNotFound(path: string): DfsError { + return { + statusCode: 404, + code: "PathNotFound", + message: `The specified path does not exist. Path: ${path}` + }; +} + +export function pathAlreadyExists(path: string): DfsError { + return { + statusCode: 409, + code: "PathAlreadyExists", + message: `The specified path already exists. Path: ${path}` + }; +} + +export function directoryNotEmpty(path: string): DfsError { + return { + statusCode: 409, + code: "DirectoryNotEmpty", + message: `The recursive query parameter value must be true to delete a non-empty directory. Path: ${path}` + }; +} + +export function invalidSourceOrDestination(message: string): DfsError { + return { + statusCode: 400, + code: "InvalidSourceUri", + message + }; +} + +export function invalidFlushPosition(actual: number, expected: number): DfsError { + return { + statusCode: 409, + code: "InvalidFlushPosition", + message: `The flush position ${actual} does not match the length of the data staged for the file (${expected}).` + }; +} + +export function conditionNotMet(): DfsError { + return { + statusCode: 412, + code: "ConditionNotMet", + message: "The condition specified using HTTP conditional header(s) is not met." + }; +} + +export function leaseIdMissing(): DfsError { + return { + statusCode: 412, + code: "LeaseIdMissing", + message: "There is currently a lease on the resource and no lease ID was specified in the request." + }; +} + +export function leaseNotPresent(): DfsError { + return { + statusCode: 409, + code: "LeaseNotPresentWithLeaseOperation", + message: "There is currently no lease on the resource." + }; +} + +export function leaseAlreadyPresent(): DfsError { + return { + statusCode: 409, + code: "LeaseAlreadyPresent", + message: "There is already a lease present." + }; +} + +export function leaseIdMismatch(): DfsError { + return { + statusCode: 409, + code: "LeaseIdMismatchWithLeaseOperation", + message: "The lease ID specified did not match the lease ID for the resource." 
+ }; +} + +export function hierarchicalNamespaceNotEnabled(filesystem: string): DfsError { + return { + statusCode: 400, + code: "HierarchicalNamespaceNotEnabled", + message: `The account associated with the filesystem does not have hierarchical namespace enabled. Filesystem: ${filesystem}` + }; +} + +export function internalError(message: string): DfsError { + return { + statusCode: 500, + code: "InternalError", + message + }; +} diff --git a/src/blob/dfs/DfsOperation.ts b/src/blob/dfs/DfsOperation.ts new file mode 100644 index 000000000..6cfa416b4 --- /dev/null +++ b/src/blob/dfs/DfsOperation.ts @@ -0,0 +1,16 @@ +export enum DfsOperation { + Filesystem_Create = "Filesystem_Create", + Filesystem_Delete = "Filesystem_Delete", + Filesystem_GetProperties = "Filesystem_GetProperties", + Filesystem_SetProperties = "Filesystem_SetProperties", + Filesystem_List = "Filesystem_List", + Filesystem_ListPaths = "Filesystem_ListPaths", + Path_Create = "Path_Create", + Path_Delete = "Path_Delete", + Path_GetProperties = "Path_GetProperties", + Path_GetAccessControl = "Path_GetAccessControl", + Path_Read = "Path_Read", + Path_Update = "Path_Update", + Path_Rename = "Path_Rename", + Path_Lease = "Path_Lease" +} diff --git a/src/blob/dfs/handlers/FilesystemHandler.ts b/src/blob/dfs/handlers/FilesystemHandler.ts new file mode 100644 index 000000000..91416eae9 --- /dev/null +++ b/src/blob/dfs/handlers/FilesystemHandler.ts @@ -0,0 +1,247 @@ +import { Request, Response } from "express"; + +import logger from "../../../common/Logger"; +import IBlobMetadataStore from "../../persistence/IBlobMetadataStore"; +import { getDfsContext } from "../DfsContext"; +import { createStorageContext } from "../DfsContextFactory"; +import { sendDfsError, filesystemNotFound, internalError } from "../DfsErrorFactory"; +import { EMULATOR_ACCOUNT_NAME, BLOB_API_VERSION } from "../../utils/constants"; +import { newEtag } from "../../../common/utils/utils"; +import * as Models from "../../generated/artifacts/models"; + +export default class FilesystemHandler { + public constructor( + private readonly metadataStore: IBlobMetadataStore, + private readonly enableHierarchicalNamespace: boolean = true + ) {} + + public async create(req: Request, res: Response): Promise { + const ctx = getDfsContext(res); + const account = ctx.account || EMULATOR_ACCOUNT_NAME; + const filesystem = ctx.filesystem!; + const now = new Date(); + const etag = newEtag(); + + try { + const userMetadata = this.extractMetadata(req) ?? {}; + // Honor x-ms-namespace-enabled if provided; fall back to server-wide default + const hnsHeader = req.headers["x-ms-namespace-enabled"] as string | undefined; + const hns = hnsHeader !== undefined ? 
hnsHeader === "true" : this.enableHierarchicalNamespace; + userMetadata["azurite_hns_enabled"] = String(hns); + + const result = await this.metadataStore.createContainer(createStorageContext(ctx.requestId), { + accountName: account, + name: filesystem, + metadata: userMetadata, + properties: { + lastModified: now, + etag, + leaseStatus: Models.LeaseStatusType.Unlocked, + leaseState: Models.LeaseStateType.Available, + hasImmutabilityPolicy: false, + hasLegalHold: false + } + } as any); + + res.status(201); + res.setHeader("ETag", result.properties.etag); + res.setHeader("Last-Modified", result.properties.lastModified.toUTCString()); + res.setHeader("x-ms-request-id", ctx.requestId); + res.setHeader("x-ms-version", BLOB_API_VERSION); + res.setHeader("x-ms-namespace-enabled", String(hns)); + res.end(); + } catch (error: any) { + if (error.statusCode === 409) { + return sendDfsError(res, { + statusCode: 409, + code: "FilesystemAlreadyExists", + message: `The specified filesystem already exists.` + }); + } + logger.error(`FilesystemHandler.create error: ${error.message}`, ctx.requestId); + sendDfsError(res, internalError(error.message)); + } + } + + public async delete(req: Request, res: Response): Promise { + const ctx = getDfsContext(res); + const account = ctx.account || EMULATOR_ACCOUNT_NAME; + const filesystem = ctx.filesystem!; + + try { + await this.metadataStore.deleteContainer( + createStorageContext(ctx.requestId), + account, + filesystem + ); + + res.status(202); + res.setHeader("x-ms-request-id", ctx.requestId); + res.setHeader("x-ms-version", BLOB_API_VERSION); + res.end(); + } catch (error: any) { + if (error.statusCode === 404) { + return sendDfsError(res, filesystemNotFound(filesystem)); + } + logger.error(`FilesystemHandler.delete error: ${error.message}`, ctx.requestId); + sendDfsError(res, internalError(error.message)); + } + } + + public async getProperties(req: Request, res: Response): Promise { + const ctx = getDfsContext(res); + const account = ctx.account || EMULATOR_ACCOUNT_NAME; + const filesystem = ctx.filesystem!; + + try { + const result = await this.metadataStore.getContainerProperties( + createStorageContext(ctx.requestId), + account, + filesystem + ); + + res.status(200); + res.setHeader("ETag", result.properties.etag); + res.setHeader("Last-Modified", result.properties.lastModified.toUTCString()); + res.setHeader("x-ms-request-id", ctx.requestId); + res.setHeader("x-ms-version", BLOB_API_VERSION); + res.setHeader("x-ms-resource-type", "filesystem"); + // Read per-container HNS flag; fall back to server default when absent (C-3) + const hns = result.metadata?.["azurite_hns_enabled"] === "true" || + (result.metadata?.["azurite_hns_enabled"] === undefined && this.enableHierarchicalNamespace); + res.setHeader("x-ms-namespace-enabled", String(hns)); + + if (result.metadata) { + // Filter internal reserved key before emitting x-ms-properties (C-4) + const internalKeys = new Set(["azurite_hns_enabled"]); + const properties = Object.entries(result.metadata) + .filter(([key]) => !internalKeys.has(key)) + .map(([key, value]) => `${key}=${Buffer.from(value).toString("base64")}`) + .join(","); + if (properties) { + res.setHeader("x-ms-properties", properties); + } + } + + res.end(); + } catch (error: any) { + if (error.statusCode === 404) { + return sendDfsError(res, filesystemNotFound(filesystem)); + } + logger.error(`FilesystemHandler.getProperties error: ${error.message}`, ctx.requestId); + sendDfsError(res, internalError(error.message)); + } + } + + public async 
list(req: Request, res: Response): Promise { + const ctx = getDfsContext(res); + const account = ctx.account || EMULATOR_ACCOUNT_NAME; + + const prefix = req.query.prefix as string | undefined; + const continuation = req.query.continuation as string | undefined; + const maxResults = Math.max(1, Math.min(5000, parseInt(req.query.maxResults as string, 10) || 5000)); + + try { + const [containers, nextMarker] = await this.metadataStore.listContainers( + createStorageContext(ctx.requestId), + account, + prefix, + maxResults, + continuation + ); + + const filesystems = containers.map(c => ({ + name: c.name, + lastModified: c.properties.lastModified.toUTCString(), + eTag: c.properties.etag + })); + + res.status(200); + res.setHeader("x-ms-request-id", ctx.requestId); + res.setHeader("x-ms-version", BLOB_API_VERSION); + if (nextMarker) { + res.setHeader("x-ms-continuation", String(nextMarker)); + } + + res.json({ filesystems }); + } catch (error: any) { + logger.error(`FilesystemHandler.list error: ${error.message}`, ctx.requestId); + sendDfsError(res, internalError(error.message)); + } + } + + public async setProperties(req: Request, res: Response): Promise { + const ctx = getDfsContext(res); + const account = ctx.account || EMULATOR_ACCOUNT_NAME; + const filesystem = ctx.filesystem!; + const now = new Date(); + + try { + // Start from existing metadata to preserve azurite_hns_enabled and other keys (P2-C-1, P2-m-3) + const existing = await this.metadataStore.getContainerProperties( + createStorageContext(ctx.requestId), account, filesystem + ); + const metadata: { [key: string]: string } = { ...(existing.metadata || {}) }; + + // Overlay x-ms-meta-* headers (filter reserved key to prevent forgery) + const metaFromHeaders = this.extractMetadata(req) || {}; + for (const [k, v] of Object.entries(metaFromHeaders)) { + if (k !== "azurite_hns_enabled") metadata[k] = v; + } + + // Parse x-ms-properties header; filter reserved internal key + const propertiesHeader = req.headers["x-ms-properties"] as string | undefined; + if (propertiesHeader) { + const pairs = propertiesHeader.split(","); + for (const pair of pairs) { + const eqIdx = pair.indexOf("="); + if (eqIdx >= 0) { + const key = pair.substring(0, eqIdx); + if (key !== "azurite_hns_enabled") { + const value = Buffer.from(pair.substring(eqIdx + 1), "base64").toString("utf8"); + metadata[key] = value; + } + } + } + } + + const etag = newEtag(); + + await this.metadataStore.setContainerMetadata( + createStorageContext(ctx.requestId), + account, + filesystem, + now, + etag, + Object.keys(metadata).length > 0 ? 
metadata : undefined + ); + + res.status(200); + res.setHeader("ETag", etag); + res.setHeader("Last-Modified", now.toUTCString()); + res.setHeader("x-ms-request-id", ctx.requestId); + res.setHeader("x-ms-version", BLOB_API_VERSION); + res.end(); + } catch (error: any) { + if (error.statusCode === 404) { + return sendDfsError(res, filesystemNotFound(filesystem)); + } + logger.error(`FilesystemHandler.setProperties error: ${error.message}`, ctx.requestId); + sendDfsError(res, internalError(error.message)); + } + } + + private extractMetadata(req: Request): { [key: string]: string } | undefined { + const metadata: { [key: string]: string } = {}; + let hasMetadata = false; + for (const [key, value] of Object.entries(req.headers)) { + if (key.toLowerCase().startsWith("x-ms-meta-") && value) { + const metaKey = key.substring("x-ms-meta-".length); + if (metaKey === "azurite_hns_enabled") continue; // reserved — block forgery + metadata[metaKey] = Array.isArray(value) ? value.join(",") : value; + hasMetadata = true; + } + } + return hasMetadata ? metadata : undefined; + } +} diff --git a/src/blob/dfs/handlers/PathHandler.ts b/src/blob/dfs/handlers/PathHandler.ts new file mode 100644 index 000000000..881f9743c --- /dev/null +++ b/src/blob/dfs/handlers/PathHandler.ts @@ -0,0 +1,1300 @@ +import { Request, Response } from "express"; + +import logger from "../../../common/Logger"; +import { OAuthLevel } from "../../../common/models"; +import IExtentStore from "../../../common/persistence/IExtentStore"; +import IBlobMetadataStore, { + BlobModel, + BlockModel +} from "../../persistence/IBlobMetadataStore"; +import { getDfsContext, IDfsContext } from "../DfsContext"; +import { + sendDfsError, + pathNotFound, + pathAlreadyExists, + filesystemNotFound, + directoryNotEmpty, + internalError, + invalidSourceOrDestination, + invalidFlushPosition +} from "../DfsErrorFactory"; +import { + EMULATOR_ACCOUNT_NAME, + BLOB_API_VERSION +} from "../../utils/constants"; +import { newEtag } from "../../../common/utils/utils"; +import * as Models from "../../generated/artifacts/models"; +import { createStorageContext } from "../DfsContextFactory"; +import { checkAcl, AclPermission } from "../DfsAclEnforcer"; +import { createHash } from "crypto"; + +const HNS_DIRECTORY_METADATA_KEY = "hdi_isfolder"; + +export default class PathHandler { + public constructor( + private readonly metadataStore: IBlobMetadataStore, + private readonly extentStore: IExtentStore, + private readonly oauth?: OAuthLevel + ) {} + + public async create(req: Request, res: Response): Promise { + const ctx = getDfsContext(res); + const account = ctx.account || EMULATOR_ACCOUNT_NAME; + const filesystem = ctx.filesystem!; + const pathName = ctx.path!; + const resource = req.query.resource as string | undefined; + const isDirectory = resource === "directory"; + + const renameSource = req.headers["x-ms-rename-source"] as string | undefined; + if (renameSource) { + return this.renamePath(req, res); + } + + // ACL enforcement: require write on the parent directory (C-5) + const parentPath = pathName.includes("/") + ? pathName.substring(0, pathName.lastIndexOf("/")) + : ""; + if (!(await this.enforceAcl(ctx, res, account, filesystem, parentPath, "w"))) return; + + try { + const now = new Date(); + const metadata: { [key: string]: string } = {}; + if (isDirectory) { + metadata[HNS_DIRECTORY_METADATA_KEY] = "true"; + } + + // Azure returns 409 PathAlreadyExists when creating a directory that + // already exists. 
This is required for the SDK's CreateIfNotExistsAsync + // to correctly return null for existing directories. + if (isDirectory) { + const existing = await this.safeGetBlobProperties(account, filesystem, pathName, ctx.requestId); + if (existing && existing.metadata?.[HNS_DIRECTORY_METADATA_KEY] === "true") { + return sendDfsError(res, pathAlreadyExists(pathName)); + } + } + + // Ensure intermediate directories exist + if (pathName.includes("/")) { + await this.ensureIntermediateDirectories(account, filesystem, pathName, now, ctx.requestId); + } + + const blobModel: BlobModel = { + accountName: account, + containerName: filesystem, + name: pathName, + snapshot: "", + isCommitted: true, + properties: { + lastModified: now, + etag: newEtag(), + contentLength: 0, + contentType: isDirectory ? undefined : "application/octet-stream", + blobType: Models.BlobType.BlockBlob, + accessTier: Models.AccessTier.Hot, + accessTierInferred: true, + creationTime: now, + legalHold: false + }, + metadata: Object.keys(metadata).length > 0 ? metadata : undefined, + committedBlocksInOrder: [], + persistency: undefined as any + }; + + await this.metadataStore.createBlob(createStorageContext(ctx.requestId), blobModel); + + // Register in HNS hierarchy table (null = root, distinct from "" used for ACL) + const hnsParentPath = pathName.includes("/") + ? pathName.substring(0, pathName.lastIndexOf("/")) + : null; + await this.metadataStore.registerHnsPath( + createStorageContext(ctx.requestId), account, filesystem, + pathName, hnsParentPath, isDirectory + ); + + res.status(201); + res.setHeader("ETag", blobModel.properties.etag!); + res.setHeader("Last-Modified", now.toUTCString()); + res.setHeader("x-ms-request-id", ctx.requestId); + res.setHeader("x-ms-version", BLOB_API_VERSION); + res.setHeader("Content-Length", "0"); + res.end(); + } catch (error: any) { + if (error.statusCode === 404) { + return sendDfsError(res, filesystemNotFound(filesystem)); + } + if (error.statusCode === 409 || + error.code === "PathAlreadyExists" || + error.code === "BlobAlreadyExists" || + error.storageError?.storageErrorCode === "BlobAlreadyExists") { + return sendDfsError(res, pathAlreadyExists(pathName)); + } + logger.error(`PathHandler.create error: ${error.message}`, ctx.requestId); + sendDfsError(res, internalError(error.message)); + } + } + + public async delete(req: Request, res: Response): Promise { + const ctx = getDfsContext(res); + const account = ctx.account || EMULATOR_ACCOUNT_NAME; + const filesystem = ctx.filesystem!; + const pathName = ctx.path!; + const recursive = req.query.recursive === "true"; + + // ACL enforcement + if (!(await this.enforceAcl(ctx, res, account, filesystem, pathName, "w"))) return; + + try { + // Check if it's a directory + const blobProps = await this.safeGetBlobProperties(account, filesystem, pathName, ctx.requestId); + if (!blobProps) { + return sendDfsError(res, pathNotFound(pathName)); + } + + const isDir = blobProps.metadata?.[HNS_DIRECTORY_METADATA_KEY] === "true"; + + if (isDir) { + // List ALL blobs under this directory prefix (recursive, no delimiter) + // to check for children. This catches blobs created via both DFS and + // Blob API, regardless of whether they're in the HNS hierarchy table. 
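+      // Example: deleting "dir1" while "dir1/a.txt" exists returns 409 DirectoryNotEmpty
+      // unless recursive=true, in which case "dir1/a.txt" is deleted first and the
+      // "dir1" directory marker blob is removed last.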
+ const prefix = pathName + "/"; + const [allChildren] = await this.metadataStore.listBlobs( + createStorageContext(ctx.requestId), account, filesystem, + undefined, undefined, prefix + ); + + if (allChildren.length > 0 && !recursive) { + return sendDfsError(res, directoryNotEmpty(pathName)); + } + + if (recursive && allChildren.length > 0) { + // Delete descendant blobs with bounded concurrency; honour lease conditions + // so leased children are rejected rather than force-deleted. + const childLeaseConditions = this.extractLeaseConditions(req); + const BATCH = 16; + for (let i = 0; i < allChildren.length; i += BATCH) { + await Promise.all( + allChildren.slice(i, i + BATCH).map(child => + this.metadataStore.deleteBlob( + createStorageContext(ctx.requestId), account, filesystem, child.name, + childLeaseConditions ? { leaseAccessConditions: childLeaseConditions } : {} + ).catch((e: any) => { if (e.statusCode !== 404) throw e; }) + ) + ); + } + // Unregister all descendants from HNS hierarchy + await this.metadataStore.unregisterHnsPathsByPrefix( + createStorageContext(ctx.requestId), account, filesystem, prefix + ); + } + } + + const leaseConditions = this.extractLeaseConditions(req); + const modifiedConditions = this.extractModifiedAccessConditions(req); + await this.metadataStore.deleteBlob( + createStorageContext(ctx.requestId), account, filesystem, pathName, + { + leaseAccessConditions: leaseConditions, + modifiedAccessConditions: modifiedConditions + } + ); + + // Unregister from HNS hierarchy + await this.metadataStore.unregisterHnsPath( + createStorageContext(ctx.requestId), account, filesystem, pathName + ); + + res.status(200); + res.setHeader("x-ms-request-id", ctx.requestId); + res.setHeader("x-ms-version", BLOB_API_VERSION); + res.end(); + } catch (error: any) { + if (error.statusCode === 404) { + return sendDfsError(res, pathNotFound(pathName)); + } + if (error.statusCode === 412) { + return sendDfsError(res, { statusCode: 412, code: "ConditionNotMet", message: "The condition specified using HTTP conditional header(s) is not met." }); + } + logger.error(`PathHandler.delete error: ${error.message}`, ctx.requestId); + sendDfsError(res, internalError(error.message)); + } + } + + public async getProperties(req: Request, res: Response): Promise { + const ctx = getDfsContext(res); + const account = ctx.account || EMULATOR_ACCOUNT_NAME; + const filesystem = ctx.filesystem!; + const pathName = ctx.path!; + const action = req.query.action as string | undefined; + + // ACL enforcement + if (!(await this.enforceAcl(ctx, res, account, filesystem, pathName, "r"))) return; + + try { + const leaseConditions = this.extractLeaseConditions(req); + const modifiedConditions = this.extractModifiedAccessConditions(req); + const result = await this.metadataStore.getBlobProperties( + createStorageContext(ctx.requestId), account, filesystem, pathName, + undefined, leaseConditions, modifiedConditions + ); + + const isDir = result.metadata?.[HNS_DIRECTORY_METADATA_KEY] === "true"; + + res.status(200); + res.setHeader("ETag", result.properties.etag!); + res.setHeader("Last-Modified", result.properties.lastModified.toUTCString()); + res.setHeader("x-ms-request-id", ctx.requestId); + res.setHeader("x-ms-version", BLOB_API_VERSION); + res.setHeader("x-ms-resource-type", isDir ? 
"directory" : "file"); + + if (result.metadata) { + const internalKeys = new Set(["dfsAclOwner", "dfsAclGroup", "dfsAclPermissions", "dfsAcl", HNS_DIRECTORY_METADATA_KEY]); + const properties = Object.entries(result.metadata) + .filter(([key]) => !internalKeys.has(key)) + .map(([key, value]) => `${key}=${Buffer.from(value as string).toString("base64")}`) + .join(","); + if (properties) { + res.setHeader("x-ms-properties", properties); + } + } + + if (!isDir) { + res.setHeader("Content-Length", String(result.properties.contentLength || 0)); + if (result.properties.contentType) { + res.setHeader("Content-Type", result.properties.contentType); + } + } else { + res.setHeader("Content-Length", "0"); + } + + // ACL headers + if (action === "getAccessControl") { + res.setHeader("x-ms-owner", (result.metadata as any)?.dfsAclOwner || "$superuser"); + res.setHeader("x-ms-group", (result.metadata as any)?.dfsAclGroup || "$superuser"); + res.setHeader("x-ms-permissions", (result.metadata as any)?.dfsAclPermissions || "rwxr-x---"); + if ((result.metadata as any)?.dfsAcl) { + res.setHeader("x-ms-acl", (result.metadata as any).dfsAcl); + } + } + + res.end(); + } catch (error: any) { + if (error.statusCode === 404) { + return sendDfsError(res, pathNotFound(pathName)); + } + if (error.statusCode === 412) { + return sendDfsError(res, { statusCode: 412, code: "ConditionNotMet", message: "The condition specified using HTTP conditional header(s) is not met." }); + } + logger.error(`PathHandler.getProperties error: ${error.message}`, ctx.requestId); + sendDfsError(res, internalError(error.message)); + } + } + + public async read(req: Request, res: Response): Promise { + const ctx = getDfsContext(res); + const account = ctx.account || EMULATOR_ACCOUNT_NAME; + const filesystem = ctx.filesystem!; + const pathName = ctx.path!; + + // ACL enforcement + if (!(await this.enforceAcl(ctx, res, account, filesystem, pathName, "r"))) return; + + try { + const leaseConditions = this.extractLeaseConditions(req); + const modifiedConditions = this.extractModifiedAccessConditions(req); + const blob = await this.metadataStore.downloadBlob( + createStorageContext(ctx.requestId), account, filesystem, pathName, + undefined, leaseConditions, modifiedConditions + ); + + if (blob.metadata?.[HNS_DIRECTORY_METADATA_KEY] === "true") { + return sendDfsError(res, { statusCode: 400, code: "PathIsDirectory", + message: "The path is a directory, not a file." }); + } + + res.status(200); + res.setHeader("ETag", blob.properties.etag!); + res.setHeader("Last-Modified", blob.properties.lastModified.toUTCString()); + res.setHeader("x-ms-request-id", ctx.requestId); + res.setHeader("x-ms-version", BLOB_API_VERSION); + res.setHeader("x-ms-resource-type", "file"); + res.setHeader("Content-Length", String(blob.properties.contentLength || 0)); + + if (blob.properties.contentType) { + res.setHeader("Content-Type", blob.properties.contentType); + } + + const hasCommittedBlocks = blob.committedBlocksInOrder && blob.committedBlocksInOrder.length > 0; + if (blob.properties.contentLength === 0 && !hasCommittedBlocks) { + return res.end(); + } + + // Read from extent store + if (hasCommittedBlocks) { + // Multi-block blob: read each block in order + for (const block of blob.committedBlocksInOrder!) 
{ + const stream = await this.extentStore.readExtent(block.persistency); + await new Promise((resolve, reject) => { + stream.on("data", (chunk: Buffer) => res.write(chunk)); + stream.on("end", resolve); + stream.on("error", (err) => { (stream as any).destroy?.(); reject(err); }); + }); + } + res.end(); + } else if (blob.persistency) { + const stream = await this.extentStore.readExtent(blob.persistency); + await new Promise((resolve, reject) => { + stream.on("end", () => { res.end(); resolve(); }); + stream.on("error", reject); + stream.pipe(res, { end: false }); // manual end() above; avoid double-close + }); + } else { + res.end(); + } + } catch (error: any) { + if (res.headersSent) { + // Headers already sent — can't send a DFS error; destroy the connection + logger.error(`PathHandler.read error after headers sent: ${error.message}`, ctx.requestId); + res.destroy(error); + return; + } + if (error.statusCode === 304) { + res.status(304); + res.setHeader("x-ms-request-id", ctx.requestId); + res.end(); + return; + } + if (error.statusCode === 404) { + return sendDfsError(res, pathNotFound(pathName)); + } + logger.error(`PathHandler.read error: ${error.message}`, ctx.requestId); + sendDfsError(res, internalError(error.message)); + } + } + + public async listPaths(req: Request, res: Response): Promise { + const ctx = getDfsContext(res); + const account = ctx.account || EMULATOR_ACCOUNT_NAME; + const filesystem = ctx.filesystem!; + const directory = req.query.directory as string | undefined; + const recursive = req.query.recursive === "true"; + const maxResults = Math.max(1, Math.min(5000, parseInt(req.query.maxResults as string, 10) || 5000)); + const continuation = req.query.continuation as string | undefined; + + // ACL enforcement: require read on the target directory (or filesystem root) + if (!(await this.enforceAcl(ctx, res, account, filesystem, directory || "", "r"))) return; + + const prefix = directory ? (directory.endsWith("/") ? directory : directory + "/") : ""; + const delimiter = recursive ? undefined : "/"; + + try { + const [blobs, prefixes, nextMarker] = await this.metadataStore.listBlobs( + createStorageContext(ctx.requestId), account, filesystem, delimiter, undefined, + prefix, maxResults, continuation + ); + + // If a specific directory was requested and nothing was found, return 404 + if (directory && blobs.length === 0 && (!prefixes || prefixes.length === 0) && !continuation) { + const dirExists = await this.safeGetBlobProperties(account, filesystem, directory, ctx.requestId); + if (!dirExists) { + return sendDfsError(res, pathNotFound(directory)); + } + } + + const paths: any[] = []; + + for (const blob of blobs) { + // Skip the directory marker itself if it matches prefix exactly + if (blob.name === directory) continue; + + const isDir = blob.metadata?.[HNS_DIRECTORY_METADATA_KEY] === "true"; + paths.push({ + name: blob.name, + isDirectory: isDir || false, + lastModified: blob.properties.lastModified.toUTCString(), + eTag: blob.properties.etag, + contentLength: isDir ? 0 : (blob.properties.contentLength || 0), + owner: blob.metadata?.dfsAclOwner || "$superuser", + group: blob.metadata?.dfsAclGroup || "$superuser", + permissions: blob.metadata?.dfsAclPermissions || "rwxr-x---" + }); + } + + // Add prefixes as directories (for non-recursive listing) + if (prefixes) { + for (const p of prefixes) { + const dirName = p.name.endsWith("/") ? 
p.name.slice(0, -1) : p.name; + const dirProps = await this.safeGetBlobProperties(account, filesystem, dirName, ctx.requestId); + paths.push({ + name: dirName, + isDirectory: true, + lastModified: dirProps?.properties.lastModified.toUTCString() ?? new Date().toUTCString(), + eTag: dirProps?.properties.etag, + contentLength: 0, + owner: dirProps?.metadata?.dfsAclOwner || "$superuser", + group: dirProps?.metadata?.dfsAclGroup || "$superuser", + permissions: dirProps?.metadata?.dfsAclPermissions || "rwxr-x---" + }); + } + } + + res.status(200); + res.setHeader("x-ms-request-id", ctx.requestId); + res.setHeader("x-ms-version", BLOB_API_VERSION); + if (nextMarker) { + res.setHeader("x-ms-continuation", nextMarker); + } + + res.json({ paths }); + } catch (error: any) { + if (error.statusCode === 404) { + return sendDfsError(res, filesystemNotFound(filesystem)); + } + logger.error(`PathHandler.listPaths error: ${error.message}`, ctx.requestId); + sendDfsError(res, internalError(error.message)); + } + } + + public async update(req: Request, res: Response): Promise { + const ctx = getDfsContext(res); + const account = ctx.account || EMULATOR_ACCOUNT_NAME; + const filesystem = ctx.filesystem!; + const pathName = ctx.path!; + + // ACL enforcement for update operations + if (!(await this.enforceAcl(ctx, res, account, filesystem, pathName, "w"))) return; + + const action = req.query.action as string; + switch (action) { + case "append": + return this.appendData(req, res); + case "flush": + return this.flushData(req, res); + case "setAccessControl": + return this.setAccessControl(req, res); + case "setAccessControlRecursive": + return this.setAccessControlRecursive(req, res); + case "setProperties": + return this.setProperties(req, res); + default: + return sendDfsError(res, { + statusCode: 400, + code: "InvalidQueryParameterValue", + message: `Value for one of the query parameters specified in the request URI is invalid. QueryParameterName: action, QueryParameterValue: ${action}` + }); + } + } + + private async appendData(req: Request, res: Response): Promise { + const ctx = getDfsContext(res); + const account = ctx.account || EMULATOR_ACCOUNT_NAME; + const filesystem = ctx.filesystem!; + const pathName = ctx.path!; + const positionParam = Array.isArray(req.query.position) + ? req.query.position[0] + : req.query.position; + const position = parseInt(String(positionParam || "0"), 10); + + try { + // Validate position matches the current expected next offset (contiguity enforcement). + // NOTE: this check is not atomic with stageBlock — two concurrent appends at the same + // position will both pass and the second will silently overwrite the first block (the + // first extent is then orphaned). This is a known limitation of the emulator. + const blobProps = await this.metadataStore.getBlobProperties( + createStorageContext(ctx.requestId), account, filesystem, pathName, undefined, undefined + ); + const blockList = await this.metadataStore.getBlockList( + createStorageContext(ctx.requestId), account, filesystem, pathName, + undefined, undefined, undefined, undefined + ); + const committedLength = blobProps.properties.contentLength ?? 0; + const uncommittedLength = (blockList.uncommittedBlocks ?? []).reduce((sum, b) => sum + (b.size ?? 
0), 0); + const expectedPosition = committedLength + uncommittedLength; + if (position !== expectedPosition) { + return sendDfsError(res, { + statusCode: 409, + code: "ConditionNotMet", + message: `Append position ${position} does not match the expected offset ${expectedPosition}.` + }); + } + + const rawBody = Array.isArray(req.body) ? Buffer.from(req.body) : req.body; + const body = Buffer.isBuffer(rawBody) ? rawBody : Buffer.from(rawBody || ""); + + // Content-MD5 validation + const contentMD5 = req.headers["content-md5"] as string | undefined; + if (contentMD5) { + const computedMD5 = createHash("md5").update(body as any).digest("base64"); + if (computedMD5 !== contentMD5) { + return sendDfsError(res, { + statusCode: 400, + code: "Md5Mismatch", + message: "The MD5 value specified in the request did not match with the MD5 value calculated by the server." + }); + } + } + + if (body.length === 0) { + res.status(202); + res.setHeader("x-ms-request-id", ctx.requestId); + res.setHeader("x-ms-version", BLOB_API_VERSION); + return res.end(); + } + + // Write to extent store + const extentChunk = await this.extentStore.appendExtent(body); + + // Stage as an uncommitted block (reusing block blob infrastructure) + const blockId = Buffer.from( + `dfs-${position.toString().padStart(20, "0")}` + ).toString("base64"); + + const block: BlockModel = { + accountName: account, + containerName: filesystem, + blobName: pathName, + isCommitted: false, + name: blockId, + size: body.length, + persistency: extentChunk + }; + + await this.metadataStore.stageBlock( + createStorageContext(ctx.requestId), block, undefined + ); + + res.status(202); + res.setHeader("x-ms-request-id", ctx.requestId); + res.setHeader("x-ms-version", BLOB_API_VERSION); + res.setHeader("x-ms-content-length", String(body.length)); + res.end(); + } catch (error: any) { + if (error.statusCode === 404) { + return sendDfsError(res, pathNotFound(pathName)); + } + logger.error(`PathHandler.appendData error: ${error.message}`, ctx.requestId); + sendDfsError(res, internalError(error.message)); + } + } + + private async flushData(req: Request, res: Response): Promise { + const ctx = getDfsContext(res); + const account = ctx.account || EMULATOR_ACCOUNT_NAME; + const filesystem = ctx.filesystem!; + const pathName = ctx.path!; + const flushPositionParam = Array.isArray(req.query.position) + ? 
req.query.position[0] + : req.query.position; + const position = parseInt(String(flushPositionParam || "0"), 10); + + try { + // Get current blob to find uncommitted blocks + const blob = await this.metadataStore.downloadBlob( + createStorageContext(ctx.requestId), account, filesystem, pathName, undefined + ); + + // Get uncommitted blocks + const blockList = await this.metadataStore.getBlockList( + createStorageContext(ctx.requestId), account, filesystem, pathName, + undefined, undefined, undefined, undefined + ); + + if (!blockList.uncommittedBlocks || blockList.uncommittedBlocks.length === 0) { + // Nothing to flush — just update the blob + res.status(200); + res.setHeader("ETag", blob.properties.etag!); + res.setHeader("Last-Modified", blob.properties.lastModified.toUTCString()); + res.setHeader("x-ms-request-id", ctx.requestId); + res.setHeader("x-ms-version", BLOB_API_VERSION); + return res.end(); + } + + // Sort blocks by the byte offset encoded in the block ID ("dfs-") + const sortedBlocks = [...blockList.uncommittedBlocks].sort((a, b) => { + const decode = (name: string) => { + const raw = Buffer.from(name, "base64").toString("utf8"); + return raw.startsWith("dfs-") ? parseInt(raw.substring(4), 10) : 0; + }; + return decode(a.name) - decode(b.name); + }); + + // Validate position matches the actual data length (committed + staged) + const committedLength = blob.properties.contentLength ?? 0; + const stagedLength = sortedBlocks.reduce((sum, b) => sum + (b.size ?? 0), 0); + const impliedLength = committedLength + stagedLength; + if (position !== impliedLength) { + return sendDfsError(res, invalidFlushPosition(position, impliedLength)); + } + + // Include previously committed blocks so multi-cycle append→flush is correct + const previouslyCommitted = (blob.committedBlocksInOrder || []).map(b => ({ + blockName: b.name, + blockCommitType: "Committed" + })); + const commitList = [ + ...previouslyCommitted, + ...sortedBlocks.map(b => ({ blockName: b.name, blockCommitType: "Uncommitted" })) + ]; + + const now = new Date(); + const etag = newEtag(); + + const updatedBlob: BlobModel = { + ...blob, + properties: { + ...blob.properties, + lastModified: now, + etag, + contentLength: impliedLength, + contentType: blob.properties.contentType || "application/octet-stream" + } + }; + + await this.metadataStore.commitBlockList( + createStorageContext(ctx.requestId), updatedBlob, commitList + ); + + res.status(200); + res.setHeader("ETag", etag); + res.setHeader("Last-Modified", now.toUTCString()); + res.setHeader("x-ms-request-id", ctx.requestId); + res.setHeader("x-ms-version", BLOB_API_VERSION); + res.setHeader("x-ms-resource-type", "file"); + res.setHeader("Content-Length", "0"); + res.end(); + } catch (error: any) { + if (error.statusCode === 404) { + return sendDfsError(res, pathNotFound(pathName)); + } + logger.error(`PathHandler.flushData error: ${error.message}`, ctx.requestId); + sendDfsError(res, internalError(error.message)); + } + } + + private async setAccessControl(req: Request, res: Response): Promise { + const ctx = getDfsContext(res); + const account = ctx.account || EMULATOR_ACCOUNT_NAME; + const filesystem = ctx.filesystem!; + const pathName = ctx.path!; + + try { + const result = await this.metadataStore.getBlobProperties( + createStorageContext(ctx.requestId), account, filesystem, pathName, undefined, undefined + ); + + // Store ACL info in metadata + const metadata = { ...(result.metadata || {}) }; + const owner = req.headers["x-ms-owner"] as string | undefined; + const group 
= req.headers["x-ms-group"] as string | undefined; + const permissions = req.headers["x-ms-permissions"] as string | undefined; + const acl = req.headers["x-ms-acl"] as string | undefined; + + if (owner) metadata["dfsAclOwner"] = owner; + if (group) metadata["dfsAclGroup"] = group; + if (permissions) metadata["dfsAclPermissions"] = permissions; + if (acl) metadata["dfsAcl"] = acl; + + const updatedProperties = await this.metadataStore.setBlobMetadata( + createStorageContext(ctx.requestId), account, filesystem, pathName, + undefined, metadata + ); + + res.status(200); + res.setHeader("ETag", updatedProperties.etag!); + res.setHeader("Last-Modified", updatedProperties.lastModified.toUTCString()); + res.setHeader("x-ms-request-id", ctx.requestId); + res.setHeader("x-ms-version", BLOB_API_VERSION); + res.end(); + } catch (error: any) { + if (error.statusCode === 404) { + return sendDfsError(res, pathNotFound(pathName)); + } + logger.error(`PathHandler.setAccessControl error: ${error.message}`, ctx.requestId); + sendDfsError(res, internalError(error.message)); + } + } + + private async setAccessControlRecursive(req: Request, res: Response): Promise { + const ctx = getDfsContext(res); + const account = ctx.account || EMULATOR_ACCOUNT_NAME; + const filesystem = ctx.filesystem!; + const pathName = ctx.path!; + const mode = req.query.mode as string || "set"; // set, modify, remove + const acl = req.headers["x-ms-acl"] as string | undefined; + const maxRecords = Math.max(1, Math.min(2000, parseInt(req.query.maxRecords as string, 10) || 2000)); + const continuation = req.query.continuation as string | undefined; + + if (mode !== "set" && mode !== "modify" && mode !== "remove") { + return sendDfsError(res, { statusCode: 400, code: "InvalidQueryParameterValue", + message: `Invalid value for query parameter 'mode': ${mode}. Must be 'set', 'modify', or 'remove'.` }); + } + + try { + const prefix = pathName.endsWith("/") ? pathName : pathName + "/"; + + const [blobs, , nextMarker] = await this.metadataStore.listBlobs( + createStorageContext(ctx.requestId), account, filesystem, + undefined, undefined, prefix, maxRecords, continuation + ); + + let directoriesSuccessful = 0; + let filesSuccessful = 0; + let failureCount = 0; + + // Include root only on first page; subsequent pages should not re-process it + const allPaths = continuation + ? 
blobs.map(b => b.name) + : [pathName, ...blobs.map(b => b.name)]; + + for (const blobPath of allPaths) { + try { + const props = await this.metadataStore.getBlobProperties( + createStorageContext(ctx.requestId), account, filesystem, + blobPath, undefined, undefined + ); + + const metadata = { ...(props.metadata || {}) }; + const isDir = metadata[HNS_DIRECTORY_METADATA_KEY] === "true"; + + if (acl) { + if (mode === "set") { + metadata["dfsAcl"] = acl; + } else if (mode === "modify") { + // Merge: new ACL entries override existing ones with same qualifier + const existing = (metadata["dfsAcl"] || "").split(",").filter(Boolean); + const incoming = acl.split(","); + const merged = new Map(); + for (const entry of existing) { + const key = entry.split(":").slice(0, 2).join(":"); + merged.set(key, entry); + } + for (const entry of incoming) { + const key = entry.split(":").slice(0, 2).join(":"); + merged.set(key, entry); + } + metadata["dfsAcl"] = Array.from(merged.values()).join(","); + } else if (mode === "remove") { + // Remove specified ACL entries + const existing = (metadata["dfsAcl"] || "").split(",").filter(Boolean); + const toRemove = new Set(acl.split(",").map((e: string) => e.split(":").slice(0, 2).join(":"))); + metadata["dfsAcl"] = existing + .filter((e: string) => !toRemove.has(e.split(":").slice(0, 2).join(":"))) + .join(","); + } + } + + await this.metadataStore.setBlobMetadata( + createStorageContext(ctx.requestId), account, filesystem, + blobPath, undefined, metadata + ); + + if (isDir) { + directoriesSuccessful++; + } else { + filesSuccessful++; + } + } catch { + failureCount++; + } + } + + res.status(200); + res.setHeader("x-ms-request-id", ctx.requestId); + res.setHeader("x-ms-version", BLOB_API_VERSION); + if (nextMarker) { + res.setHeader("x-ms-continuation", nextMarker); + } + + res.json({ + directoriesSuccessful, + filesSuccessful, + failureCount + }); + } catch (error: any) { + if (error.statusCode === 404) { + return sendDfsError(res, pathNotFound(pathName)); + } + logger.error(`PathHandler.setAccessControlRecursive error: ${error.message}`, ctx.requestId); + sendDfsError(res, internalError(error.message)); + } + } + + private async setProperties(req: Request, res: Response): Promise { + const ctx = getDfsContext(res); + const account = ctx.account || EMULATOR_ACCOUNT_NAME; + const filesystem = ctx.filesystem!; + const pathName = ctx.path!; + + try { + const result = await this.metadataStore.getBlobProperties( + createStorageContext(ctx.requestId), account, filesystem, pathName, undefined, undefined + ); + + const metadata = { ...(result.metadata || {}) }; + + // Parse x-ms-properties header (base64 encoded key=value pairs); block reserved keys + const reservedKeys = new Set(["hdi_isfolder", "dfsAclOwner", "dfsAclGroup", "dfsAclPermissions", "dfsAcl"]); + const propertiesHeader = req.headers["x-ms-properties"] as string | undefined; + if (propertiesHeader) { + const pairs = propertiesHeader.split(","); + for (const pair of pairs) { + const eqIdx = pair.indexOf("="); + if (eqIdx >= 0) { + const key = pair.substring(0, eqIdx); + if (!reservedKeys.has(key)) { + const value = Buffer.from(pair.substring(eqIdx + 1), "base64").toString("utf8"); + metadata[key] = value; + } + } + } + } + + const updatedProperties = await this.metadataStore.setBlobMetadata( + createStorageContext(ctx.requestId), account, filesystem, pathName, + undefined, metadata + ); + + res.status(200); + res.setHeader("ETag", updatedProperties.etag!); + res.setHeader("Last-Modified", 
updatedProperties.lastModified.toUTCString()); + res.setHeader("x-ms-request-id", ctx.requestId); + res.setHeader("x-ms-version", BLOB_API_VERSION); + res.end(); + } catch (error: any) { + if (error.statusCode === 404) { + return sendDfsError(res, pathNotFound(pathName)); + } + logger.error(`PathHandler.setProperties error: ${error.message}`, ctx.requestId); + sendDfsError(res, internalError(error.message)); + } + } + + public async lease(req: Request, res: Response): Promise { + const leaseAction = (req.headers["x-ms-lease-action"] as string || "").toLowerCase(); + switch (leaseAction) { + case "acquire": + return this.acquireLease(req, res); + case "release": + return this.releaseLease(req, res); + case "renew": + return this.renewLease(req, res); + case "break": + return this.breakLease(req, res); + case "change": + return this.changeLease(req, res); + default: + return sendDfsError(res, { + statusCode: 400, + code: "InvalidHeaderValue", + message: `The value for one of the HTTP headers is not in the correct format. Header: x-ms-lease-action, Value: ${leaseAction}` + }); + } + } + + private async acquireLease(req: Request, res: Response): Promise { + const ctx = getDfsContext(res); + const account = ctx.account || EMULATOR_ACCOUNT_NAME; + const filesystem = ctx.filesystem!; + const pathName = ctx.path!; + + try { + const duration = parseInt(req.headers["x-ms-lease-duration"] as string || "-1", 10); + const proposedLeaseId = req.headers["x-ms-proposed-lease-id"] as string | undefined; + const modifiedConditions = this.extractModifiedAccessConditions(req); + + const result = await this.metadataStore.acquireBlobLease( + createStorageContext(ctx.requestId), + account, filesystem, pathName, duration, proposedLeaseId, + { modifiedAccessConditions: modifiedConditions } + ); + + res.status(201); + res.setHeader("ETag", result.properties.etag!); + res.setHeader("Last-Modified", result.properties.lastModified.toUTCString()); + res.setHeader("x-ms-lease-id", result.leaseId!); + res.setHeader("x-ms-request-id", ctx.requestId); + res.setHeader("x-ms-version", BLOB_API_VERSION); + res.end(); + } catch (error: any) { + this.handleLeaseError(res, error, ctx.requestId, pathName); + } + } + + private async releaseLease(req: Request, res: Response): Promise { + const ctx = getDfsContext(res); + const account = ctx.account || EMULATOR_ACCOUNT_NAME; + const filesystem = ctx.filesystem!; + const pathName = ctx.path!; + + try { + const leaseId = req.headers["x-ms-lease-id"] as string; + const modifiedConditions = this.extractModifiedAccessConditions(req); + + await this.metadataStore.releaseBlobLease( + createStorageContext(ctx.requestId), + account, filesystem, pathName, leaseId, + { modifiedAccessConditions: modifiedConditions } + ); + + res.status(200); + res.setHeader("x-ms-request-id", ctx.requestId); + res.setHeader("x-ms-version", BLOB_API_VERSION); + res.end(); + } catch (error: any) { + this.handleLeaseError(res, error, ctx.requestId, pathName); + } + } + + private async renewLease(req: Request, res: Response): Promise { + const ctx = getDfsContext(res); + const account = ctx.account || EMULATOR_ACCOUNT_NAME; + const filesystem = ctx.filesystem!; + const pathName = ctx.path!; + + try { + const leaseId = req.headers["x-ms-lease-id"] as string; + const modifiedConditions = this.extractModifiedAccessConditions(req); + + const result = await this.metadataStore.renewBlobLease( + createStorageContext(ctx.requestId), + account, filesystem, pathName, leaseId, + { modifiedAccessConditions: modifiedConditions 
} + ); + + res.status(200); + res.setHeader("ETag", result.properties.etag!); + res.setHeader("Last-Modified", result.properties.lastModified.toUTCString()); + res.setHeader("x-ms-lease-id", result.leaseId!); + res.setHeader("x-ms-request-id", ctx.requestId); + res.setHeader("x-ms-version", BLOB_API_VERSION); + res.end(); + } catch (error: any) { + this.handleLeaseError(res, error, ctx.requestId, pathName); + } + } + + private async breakLease(req: Request, res: Response): Promise { + const ctx = getDfsContext(res); + const account = ctx.account || EMULATOR_ACCOUNT_NAME; + const filesystem = ctx.filesystem!; + const pathName = ctx.path!; + + try { + const rawBreakPeriod = req.headers["x-ms-lease-break-period"] as string | undefined; + const breakPeriod = rawBreakPeriod !== undefined ? parseInt(rawBreakPeriod, 10) : undefined; + if (breakPeriod !== undefined && isNaN(breakPeriod)) { + return sendDfsError(res, { statusCode: 400, code: "InvalidHeaderValue", + message: "x-ms-lease-break-period must be a non-negative integer." }); + } + const modifiedConditions = this.extractModifiedAccessConditions(req); + + const result = await this.metadataStore.breakBlobLease( + createStorageContext(ctx.requestId), + account, filesystem, pathName, breakPeriod, + { modifiedAccessConditions: modifiedConditions } + ); + + res.status(202); + res.setHeader("ETag", result.properties.etag!); + res.setHeader("Last-Modified", result.properties.lastModified.toUTCString()); + if (result.leaseTime !== undefined) { + res.setHeader("x-ms-lease-time", String(result.leaseTime)); + } + res.setHeader("x-ms-request-id", ctx.requestId); + res.setHeader("x-ms-version", BLOB_API_VERSION); + res.end(); + } catch (error: any) { + this.handleLeaseError(res, error, ctx.requestId, pathName); + } + } + + private async changeLease(req: Request, res: Response): Promise { + const ctx = getDfsContext(res); + const account = ctx.account || EMULATOR_ACCOUNT_NAME; + const filesystem = ctx.filesystem!; + const pathName = ctx.path!; + + try { + const leaseId = req.headers["x-ms-lease-id"] as string; + const proposedLeaseId = req.headers["x-ms-proposed-lease-id"] as string; + const modifiedConditions = this.extractModifiedAccessConditions(req); + + const result = await this.metadataStore.changeBlobLease( + createStorageContext(ctx.requestId), + account, filesystem, pathName, leaseId, proposedLeaseId, + { modifiedAccessConditions: modifiedConditions } + ); + + res.status(200); + res.setHeader("ETag", result.properties.etag!); + res.setHeader("Last-Modified", result.properties.lastModified.toUTCString()); + res.setHeader("x-ms-lease-id", result.leaseId!); + res.setHeader("x-ms-request-id", ctx.requestId); + res.setHeader("x-ms-version", BLOB_API_VERSION); + res.end(); + } catch (error: any) { + this.handleLeaseError(res, error, ctx.requestId, pathName); + } + } + + private handleLeaseError(res: Response, error: any, requestId: string, pathName: string): void { + if (error.statusCode === 404) { + return sendDfsError(res, pathNotFound(pathName)); + } + if (error.statusCode === 409 || error.statusCode === 412) { + return sendDfsError(res, { + statusCode: error.statusCode, + code: error.storageErrorCode || error.code || "LeaseOperationFailed", + message: error.storageErrorMessage || error.message + }); + } + logger.error(`PathHandler.lease error: ${error.message}`, requestId); + sendDfsError(res, internalError(error.message)); + } + + private async renamePath(req: Request, res: Response): Promise { + const ctx = getDfsContext(res); + const account = 
ctx.account || EMULATOR_ACCOUNT_NAME; + const destFilesystem = ctx.filesystem!; + const destPath = ctx.path!; + const renameSource = req.headers["x-ms-rename-source"] as string; + + let sourceFilesystem: string | undefined; + let sourcePath: string | undefined; + + try { + // Parse rename source: /{filesystem}/{path}?sastoken + const sourceUrl = new URL(renameSource, "http://localhost"); + const sourceParts = sourceUrl.pathname.split("/").filter(p => p).map(decodeURIComponent); + if (sourceParts.some(p => p === "..")) { + return sendDfsError(res, invalidSourceOrDestination("Rename source path must not contain '..' segments.")); + } + + // Handle both /{account}/{filesystem}/{path} and /{filesystem}/{path} + if (sourceParts.length >= 3 && sourceParts[0] === account) { + sourceFilesystem = sourceParts[1]; + sourcePath = sourceParts.slice(2).join("/"); + } else if (sourceParts.length >= 2) { + sourceFilesystem = sourceParts[0]; + sourcePath = sourceParts.slice(1).join("/"); + } else { + return sendDfsError(res, invalidSourceOrDestination( + `Invalid rename source: ${renameSource}` + )); + } + + // Get source blob to check if it exists and whether it's a directory + const sourceBlob = await this.safeGetBlobProperties(account, sourceFilesystem, sourcePath!, ctx.requestId); + if (!sourceBlob) { + return sendDfsError(res, pathNotFound(sourcePath)); + } + + // ACL enforcement: write on source (moving away), write on destination + if (!(await this.enforceAcl(ctx, res, account, sourceFilesystem, sourcePath!, "w"))) return; + if (!(await this.enforceAcl(ctx, res, account, destFilesystem, destPath, "w"))) return; + + const isDir = sourceBlob.metadata?.[HNS_DIRECTORY_METADATA_KEY] === "true"; + + // Azure overwrite semantics: if destination exists, overwrite files and empty + // directories; reject rename onto a non-empty directory (M-1) + const destBlob = await this.safeGetBlobProperties(account, destFilesystem, destPath, ctx.requestId); + if (destBlob) { + const destIsDir = destBlob.metadata?.[HNS_DIRECTORY_METADATA_KEY] === "true"; + if (destIsDir) { + // Check if the destination directory is empty + const destPrefix = destPath + "/"; + const [destChildren] = await this.metadataStore.listBlobs( + createStorageContext(ctx.requestId), account, destFilesystem, undefined, undefined, destPrefix, 1 + ); + if (destChildren.length > 0) { + return sendDfsError(res, { statusCode: 409, code: "DirectoryNotEmpty", message: "The directory is not empty." }); + } + } + // Delete the destination blob (file or empty directory) before renaming. + // NOTE: the delete and rename are not a single atomic transaction — a concurrent + // create at destPath between these two steps will cause a constraint violation. + // This is a known emulator limitation. 
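+            // Illustrative request shape (hypothetical names) that reaches this overwrite
+            // branch; DFS is served on the blob port, so for the default account this is:
+            //   PUT http://127.0.0.1:10000/devstoreaccount1/myfs/new/dir
+            //   x-ms-rename-source: /devstoreaccount1/myfs/old/dir
+            // If "new/dir" already exists as an empty directory it is removed here, and the
+            // source tree is then moved onto it by renamePathAtomic below.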
+ await this.metadataStore.deleteBlob( + createStorageContext(ctx.requestId), account, destFilesystem, destPath, {} + ); + await this.metadataStore.unregisterHnsPath( + createStorageContext(ctx.requestId), account, destFilesystem, destPath + ); + } + + const now = new Date(); + + // Create intermediate directories before the atomic rename so hierarchy is consistent + if (destPath.includes("/")) { + await this.ensureIntermediateDirectories(account, destFilesystem, destPath, now, ctx.requestId); + } + + const result = await this.metadataStore.renamePathAtomic( + createStorageContext(ctx.requestId), + account, + sourceFilesystem, + sourcePath, + destFilesystem, + destPath, + isDir + ); + + res.status(201); + res.setHeader("ETag", result.etag!); + res.setHeader("Last-Modified", result.lastModified!.toUTCString()); + res.setHeader("x-ms-request-id", ctx.requestId); + res.setHeader("x-ms-version", BLOB_API_VERSION); + res.setHeader("Content-Length", "0"); + res.end(); + } catch (error: any) { + if (error.statusCode === 404) { + return sendDfsError(res, pathNotFound(sourcePath ?? destPath)); + } + logger.error(`PathHandler.renamePath error: ${error.message}`, ctx.requestId); + sendDfsError(res, internalError(error.message)); + } + } + + private async ensureIntermediateDirectories( + account: string, + filesystem: string, + pathName: string, + now: Date, + requestId: string + ): Promise { + const parts = pathName.split("/"); + // Skip the last part (the file/dir being created) + for (let i = 1; i < parts.length; i++) { + const dirPath = parts.slice(0, i).join("/"); + const existing = await this.safeGetBlobProperties(account, filesystem, dirPath, requestId); + if (existing && existing.metadata?.[HNS_DIRECTORY_METADATA_KEY] !== "true") { + // A file exists at this path — cannot use it as a parent directory + throw Object.assign(new Error(`PathConflict: "${dirPath}" is a file, not a directory`), { statusCode: 409, code: "PathAlreadyExists" }); + } + if (!existing) { + const dirBlob: BlobModel = { + accountName: account, + containerName: filesystem, + name: dirPath, + snapshot: "", + isCommitted: true, + properties: { + lastModified: now, + etag: newEtag(), + contentLength: 0, + blobType: Models.BlobType.BlockBlob, + accessTier: Models.AccessTier.Hot, + accessTierInferred: true, + creationTime: now, + legalHold: false + }, + metadata: { [HNS_DIRECTORY_METADATA_KEY]: "true" }, + committedBlocksInOrder: [], + persistency: undefined as any + }; + try { + await this.metadataStore.createBlob(createStorageContext(requestId), dirBlob); + // Register intermediate directory in HNS hierarchy + const parentDir = i > 1 ? parts.slice(0, i - 1).join("/") : null; + await this.metadataStore.registerHnsPath( + createStorageContext(requestId), account, filesystem, + dirPath, parentDir, true + ); + } catch { + // Ignore if already exists (race condition) + } + } + } + } + + /** + * Enforce ACL on a path operation when --oauth acl is enabled. + * Returns true if allowed, sends error response and returns false if denied. 
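+   *
+   * Illustrative example (assumes POSIX-style evaluation as in ADLS Gen2): with
+   * dfsAclPermissions "rwxr-x---", a caller whose OAuth identity matches
+   * dfsAclOwner passes a "w" check, while a caller matching only dfsAclGroup has
+   * no write bit and is rejected with 403 AuthorizationPermissionMismatch.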
+ */ + private async enforceAcl( + ctx: IDfsContext, + res: Response, + account: string, + filesystem: string, + pathName: string, + requiredPermission: AclPermission + ): Promise { + if (this.oauth !== OAuthLevel.ACL || !ctx.identity) { + return true; // ACL enforcement not active + } + + try { + const blobProps = await this.safeGetBlobProperties(account, filesystem, pathName, ctx.requestId); + if (!blobProps) { + return true; // Path doesn't exist yet (create) — allow + } + + const owner = blobProps.metadata?.dfsAclOwner; + const group = blobProps.metadata?.dfsAclGroup; + const permissions = blobProps.metadata?.dfsAclPermissions; + const acl = blobProps.metadata?.dfsAcl; + + const result = checkAcl(ctx.identity, owner, group, permissions, acl, requiredPermission); + + if (!result.allowed) { + logger.info( + `PathHandler ACL denied: ${result.reason} (path=${pathName}, perm=${requiredPermission})`, + ctx.requestId + ); + sendDfsError(res, { + statusCode: 403, + code: "AuthorizationPermissionMismatch", + message: `This request is not authorized to perform this operation using this permission. Required: ${requiredPermission}` + }); + return false; + } + + return true; + } catch (error: any) { + logger.error(`PathHandler.enforceAcl error: ${error.message}`, ctx.requestId); + sendDfsError(res, internalError("ACL evaluation failed.")); + return false; + } + } + + private extractLeaseConditions(req: Request): Models.LeaseAccessConditions | undefined { + const leaseId = req.headers["x-ms-lease-id"] as string | undefined; + if (leaseId) { + return { leaseId }; + } + return undefined; + } + + private extractModifiedAccessConditions(req: Request): Models.ModifiedAccessConditions | undefined { + const ifMatch = req.headers["if-match"] as string | undefined; + const ifNoneMatch = req.headers["if-none-match"] as string | undefined; + const ifModifiedSince = req.headers["if-modified-since"] as string | undefined; + const ifUnmodifiedSince = req.headers["if-unmodified-since"] as string | undefined; + + if (!ifMatch && !ifNoneMatch && !ifModifiedSince && !ifUnmodifiedSince) { + return undefined; + } + + return { + ifMatch, + ifNoneMatch, + ifModifiedSince: ifModifiedSince ? new Date(ifModifiedSince) : undefined, + ifUnmodifiedSince: ifUnmodifiedSince ? new Date(ifUnmodifiedSince) : undefined + }; + } + + private async safeGetBlobProperties( + account: string, + filesystem: string, + pathName: string, + requestId: string + ): Promise { + try { + return await this.metadataStore.getBlobProperties( + createStorageContext(requestId), account, filesystem, pathName, undefined, undefined + ); + } catch (error: any) { + if (error.statusCode === 404) return undefined; + throw error; // rethrow real errors — do not mask as "not found" + } + } +} diff --git a/src/blob/generated-dfs/Context.ts b/src/blob/generated-dfs/Context.ts new file mode 100644 index 000000000..76cef2f28 --- /dev/null +++ b/src/blob/generated-dfs/Context.ts @@ -0,0 +1,42 @@ +/* + * Copyright (c) Microsoft Corporation. All rights reserved. + * Licensed under the MIT License. See License.txt in the project root for license information. + * + * DFS Context — wraps the IDfsContext from the DFS middleware layer + * and provides a typed interface compatible with the generated handler pattern. 
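+ *
+ * Assumed flow: DfsDispatchMiddleware resolves the DfsOperation and builds an
+ * IDfsRequestContext, which is wrapped in this Context before the handler
+ * method resolved via handlerMappers is invoked.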
+ */ + +import { DfsOperation } from "./artifacts/operation"; + +export interface IDfsRequestContext { + requestId: string; + startTime: Date; + account?: string; + filesystem?: string; + path?: string; + isSecondary?: boolean; + operation?: DfsOperation; + authenticationPath?: string; +} + +/** + * Context object passed to generated DFS handler methods. + * Mirrors the pattern of src/blob/generated/Context.ts but tailored for DFS. + */ +export default class Context { + public readonly contextId: string; + public readonly startTime: Date; + public readonly account: string | undefined; + public readonly filesystem: string | undefined; + public readonly path: string | undefined; + public operation: DfsOperation | undefined; + + public constructor(dfsContext: IDfsRequestContext) { + this.contextId = dfsContext.requestId; + this.startTime = dfsContext.startTime; + this.account = dfsContext.account; + this.filesystem = dfsContext.filesystem; + this.path = dfsContext.path; + this.operation = dfsContext.operation; + } +} diff --git a/src/blob/generated-dfs/artifacts/models.ts b/src/blob/generated-dfs/artifacts/models.ts new file mode 100644 index 000000000..28cbbacf7 --- /dev/null +++ b/src/blob/generated-dfs/artifacts/models.ts @@ -0,0 +1,287 @@ +/* + * Copyright (c) Microsoft Corporation. All rights reserved. + * Licensed under the MIT License. See License.txt in the project root for license information. + * + * Code generated by Microsoft (R) AutoRest Code Generator. + * Changes may cause incorrect behavior and will be lost if the code is regenerated. + */ + +// --------------------------------------------------------------------------- +// Common models +// --------------------------------------------------------------------------- + +export interface ModifiedAccessConditions { + ifModifiedSince?: Date; + ifUnmodifiedSince?: Date; + ifMatch?: string; + ifNoneMatch?: string; +} + +export interface SourceModifiedAccessConditions { + sourceIfMatch?: string; + sourceIfNoneMatch?: string; + sourceIfModifiedSince?: Date; + sourceIfUnmodifiedSince?: Date; +} + +export interface LeaseAccessConditions { + leaseId?: string; +} + +// --------------------------------------------------------------------------- +// Filesystem models +// --------------------------------------------------------------------------- + +export interface FilesystemItem { + name: string; + lastModified: string; + eTag: string; +} + +export interface FilesystemListResponse { + filesystems?: FilesystemItem[]; +} + +export interface FilesystemListOptionalParams { + prefix?: string; + continuation?: string; + maxResults?: number; +} + +export interface FilesystemCreateOptionalParams { + properties?: string; +} + +export interface FilesystemCreateResponse { + statusCode: 201; + eTag?: string; + lastModified?: Date; + namespaceEnabled?: string; + requestId?: string; + version?: string; +} + +export interface FilesystemDeleteOptionalParams { + modifiedAccessConditions?: ModifiedAccessConditions; +} + +export interface FilesystemDeleteResponse { + statusCode: 202; + requestId?: string; + version?: string; +} + +export interface FilesystemGetPropertiesOptionalParams { + // No optional params beyond standard headers +} + +export interface FilesystemGetPropertiesResponse { + statusCode: 200; + eTag?: string; + lastModified?: Date; + properties?: string; + namespaceEnabled?: string; + requestId?: string; + version?: string; +} + +export interface FilesystemSetPropertiesOptionalParams { + properties?: string; + modifiedAccessConditions?: 
ModifiedAccessConditions; +} + +export interface FilesystemSetPropertiesResponse { + statusCode: 200; + eTag?: string; + lastModified?: Date; + requestId?: string; + version?: string; +} + +export interface FilesystemListPathsOptionalParams { + directory?: string; + recursive?: boolean; + continuation?: string; + maxResults?: number; + upn?: boolean; +} + +// --------------------------------------------------------------------------- +// Path models +// --------------------------------------------------------------------------- + +export type PathResourceType = "file" | "directory"; + +export interface PathItem { + name: string; + isDirectory?: boolean; + lastModified: string; + eTag?: string; + contentLength?: number; + owner?: string; + group?: string; + permissions?: string; +} + +export interface PathListResponse { + paths?: PathItem[]; +} + +export interface PathCreateOptionalParams { + resource?: PathResourceType; + continuation?: string; + renameSource?: string; + properties?: string; + permissions?: string; + umask?: string; + leaseAccessConditions?: LeaseAccessConditions; + modifiedAccessConditions?: ModifiedAccessConditions; + sourceModifiedAccessConditions?: SourceModifiedAccessConditions; +} + +export interface PathCreateResponse { + statusCode: 201; + eTag?: string; + lastModified?: Date; + continuation?: string; + contentLength?: number; + requestId?: string; + version?: string; +} + +export interface PathDeleteOptionalParams { + recursive?: boolean; + continuation?: string; + leaseAccessConditions?: LeaseAccessConditions; + modifiedAccessConditions?: ModifiedAccessConditions; +} + +export interface PathDeleteResponse { + statusCode: 200; + continuation?: string; + requestId?: string; + version?: string; +} + +export interface PathGetPropertiesOptionalParams { + action?: "getAccessControl" | "getStatus"; + upn?: boolean; + leaseAccessConditions?: LeaseAccessConditions; + modifiedAccessConditions?: ModifiedAccessConditions; +} + +export interface PathGetPropertiesResponse { + statusCode: 200; + eTag?: string; + lastModified?: Date; + resourceType?: string; + properties?: string; + owner?: string; + group?: string; + permissions?: string; + acl?: string; + leaseDuration?: string; + leaseState?: string; + leaseStatus?: string; + contentLength?: number; + contentType?: string; + requestId?: string; + version?: string; +} + +export interface PathReadOptionalParams { + range?: string; + leaseAccessConditions?: LeaseAccessConditions; + modifiedAccessConditions?: ModifiedAccessConditions; +} + +export interface PathReadResponse { + statusCode: 200; + body?: NodeJS.ReadableStream; + acceptRanges?: string; + contentRange?: string; + eTag?: string; + lastModified?: Date; + resourceType?: string; + properties?: string; + leaseDuration?: string; + leaseState?: string; + leaseStatus?: string; + contentLength?: number; + contentType?: string; + requestId?: string; + version?: string; +} + +export type PathUpdateAction = + | "append" + | "flush" + | "setAccessControl" + | "setAccessControlRecursive" + | "setProperties"; + +export type AclMode = "set" | "modify" | "remove"; + +export interface PathUpdateOptionalParams { + action: PathUpdateAction; + mode?: AclMode; + position?: number; + retainUncommittedData?: boolean; + close?: boolean; + contentLength?: number; + contentMD5?: string; + properties?: string; + owner?: string; + group?: string; + permissions?: string; + acl?: string; + leaseAccessConditions?: LeaseAccessConditions; + modifiedAccessConditions?: ModifiedAccessConditions; + 
maxRecords?: number; + continuation?: string; +} + +export interface PathUpdateResponse { + statusCode: 200 | 202; + eTag?: string; + lastModified?: Date; + contentLength?: number; + continuation?: string; + requestId?: string; + version?: string; + // For setAccessControlRecursive + directoriesSuccessful?: number; + filesSuccessful?: number; + failureCount?: number; +} + +export type LeaseAction = "acquire" | "release" | "renew" | "break" | "change"; + +export interface PathLeaseOptionalParams { + leaseAction: LeaseAction; + leaseDuration?: number; + leaseBreakPeriod?: number; + leaseId?: string; + proposedLeaseId?: string; + modifiedAccessConditions?: ModifiedAccessConditions; +} + +export interface PathLeaseResponse { + statusCode: 200 | 201 | 202; + eTag?: string; + lastModified?: Date; + leaseId?: string; + leaseTime?: number; + requestId?: string; + version?: string; +} + +// --------------------------------------------------------------------------- +// Error model +// --------------------------------------------------------------------------- + +export interface StorageError { + statusCode: number; + code: string; + message: string; +} diff --git a/src/blob/generated-dfs/artifacts/operation.ts b/src/blob/generated-dfs/artifacts/operation.ts new file mode 100644 index 000000000..3be500c71 --- /dev/null +++ b/src/blob/generated-dfs/artifacts/operation.ts @@ -0,0 +1,23 @@ +/* + * Copyright (c) Microsoft Corporation. All rights reserved. + * Licensed under the MIT License. See License.txt in the project root for license information. + * + * Code generated by Microsoft (R) AutoRest Code Generator. + * Changes may cause incorrect behavior and will be lost if the code is regenerated. + */ + +export enum DfsOperation { + Filesystem_Create, + Filesystem_Delete, + Filesystem_GetProperties, + Filesystem_SetProperties, + Filesystem_List, + Filesystem_ListPaths, + Path_Create, + Path_Delete, + Path_GetProperties, + Path_Read, + Path_Update, + Path_Lease, +} +export default DfsOperation; diff --git a/src/blob/generated-dfs/artifacts/specifications.ts b/src/blob/generated-dfs/artifacts/specifications.ts new file mode 100644 index 000000000..10cb0e182 --- /dev/null +++ b/src/blob/generated-dfs/artifacts/specifications.ts @@ -0,0 +1,104 @@ +/* + * Copyright (c) Microsoft Corporation. All rights reserved. + * Licensed under the MIT License. See License.txt in the project root for license information. + * + * Code generated by Microsoft (R) AutoRest Code Generator. + * Changes may cause incorrect behavior and will be lost if the code is regenerated. + */ + +import { DfsOperation } from "./operation"; + +/** + * Specification for matching an incoming HTTP request to a DFS operation. + * Used by the dispatch middleware to route requests. + */ +export interface IDfsOperationSpec { + httpMethod: string; + /** If true, the request must have a filesystem but no path in the URL. */ + filesystemOnly?: boolean; + /** If true, the request must have both filesystem and path in the URL. */ + requiresPath?: boolean; + /** Query parameters that must be present with specific values. */ + queryConditions?: { [key: string]: string | string[] | true }; + /** Headers that must be present (value = true means any value). */ + headerConditions?: { [key: string]: string | string[] | true }; +} + +/** + * Dispatch specifications for all DFS operations. + * The dispatch middleware iterates these and selects the best match. 
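+ *
+ * Example: `PUT /devstoreaccount1/myfs/dir/file.txt?resource=file` carries a
+ * path segment and matches Path_Create; a `PATCH` to the same URL with
+ * `?action=append` matches Path_Update; `PUT /devstoreaccount1/myfs?resource=filesystem`
+ * has no path segment and matches Filesystem_Create.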
+ */ +const Specifications: { [key: number]: IDfsOperationSpec } = {}; + +// ---- Filesystem operations ---- + +Specifications[DfsOperation.Filesystem_List] = { + httpMethod: "GET", + filesystemOnly: false, + queryConditions: { resource: "account" } +}; + +Specifications[DfsOperation.Filesystem_Create] = { + httpMethod: "PUT", + filesystemOnly: true, + queryConditions: { resource: "filesystem" } +}; + +Specifications[DfsOperation.Filesystem_Delete] = { + httpMethod: "DELETE", + filesystemOnly: true, + queryConditions: { resource: "filesystem" } +}; + +Specifications[DfsOperation.Filesystem_GetProperties] = { + httpMethod: "HEAD", + filesystemOnly: true, + queryConditions: { resource: "filesystem" } +}; + +Specifications[DfsOperation.Filesystem_SetProperties] = { + httpMethod: "PATCH", + filesystemOnly: true, + queryConditions: { resource: "filesystem" } +}; + +Specifications[DfsOperation.Filesystem_ListPaths] = { + httpMethod: "GET", + filesystemOnly: true, + queryConditions: { resource: "filesystem" } +}; + +// ---- Path operations ---- + +Specifications[DfsOperation.Path_Create] = { + httpMethod: "PUT", + requiresPath: true +}; + +Specifications[DfsOperation.Path_Delete] = { + httpMethod: "DELETE", + requiresPath: true +}; + +Specifications[DfsOperation.Path_GetProperties] = { + httpMethod: "HEAD", + requiresPath: true +}; + +Specifications[DfsOperation.Path_Read] = { + httpMethod: "GET", + requiresPath: true +}; + +Specifications[DfsOperation.Path_Update] = { + httpMethod: "PATCH", + requiresPath: true +}; + +Specifications[DfsOperation.Path_Lease] = { + httpMethod: "POST", + requiresPath: true, + headerConditions: { "x-ms-lease-action": true } +}; + +export default Specifications; diff --git a/src/blob/generated-dfs/handlers/IFilesystemHandler.ts b/src/blob/generated-dfs/handlers/IFilesystemHandler.ts new file mode 100644 index 000000000..4e03be246 --- /dev/null +++ b/src/blob/generated-dfs/handlers/IFilesystemHandler.ts @@ -0,0 +1,42 @@ +/* + * Copyright (c) Microsoft Corporation. All rights reserved. + * Licensed under the MIT License. See License.txt in the project root for license information. + * + * Code generated by Microsoft (R) AutoRest Code Generator. + * Changes may cause incorrect behavior and will be lost if the code is regenerated. + */ + +import * as DfsModels from "../artifacts/models"; +import Context from "../Context"; + +export default interface IFilesystemHandler { + create( + options: DfsModels.FilesystemCreateOptionalParams, + context: Context + ): Promise; + + delete( + options: DfsModels.FilesystemDeleteOptionalParams, + context: Context + ): Promise; + + getProperties( + options: DfsModels.FilesystemGetPropertiesOptionalParams, + context: Context + ): Promise; + + setProperties( + options: DfsModels.FilesystemSetPropertiesOptionalParams, + context: Context + ): Promise; + + list( + options: DfsModels.FilesystemListOptionalParams, + context: Context + ): Promise; + + listPaths( + options: DfsModels.FilesystemListPathsOptionalParams, + context: Context + ): Promise; +} diff --git a/src/blob/generated-dfs/handlers/IHandlers.ts b/src/blob/generated-dfs/handlers/IHandlers.ts new file mode 100644 index 000000000..6baf3a671 --- /dev/null +++ b/src/blob/generated-dfs/handlers/IHandlers.ts @@ -0,0 +1,16 @@ +/* + * Copyright (c) Microsoft Corporation. All rights reserved. + * Licensed under the MIT License. See License.txt in the project root for license information. + * + * Code generated by Microsoft (R) AutoRest Code Generator. 
+ * Changes may cause incorrect behavior and will be lost if the code is regenerated. + */ + +import IFilesystemHandler from "./IFilesystemHandler"; +import IPathHandler from "./IPathHandler"; + +export interface IDfsHandlers { + filesystemHandler: IFilesystemHandler; + pathHandler: IPathHandler; +} +export default IDfsHandlers; diff --git a/src/blob/generated-dfs/handlers/IPathHandler.ts b/src/blob/generated-dfs/handlers/IPathHandler.ts new file mode 100644 index 000000000..653b9efb5 --- /dev/null +++ b/src/blob/generated-dfs/handlers/IPathHandler.ts @@ -0,0 +1,44 @@ +/* + * Copyright (c) Microsoft Corporation. All rights reserved. + * Licensed under the MIT License. See License.txt in the project root for license information. + * + * Code generated by Microsoft (R) AutoRest Code Generator. + * Changes may cause incorrect behavior and will be lost if the code is regenerated. + */ + +import * as DfsModels from "../artifacts/models"; +import Context from "../Context"; + +export default interface IPathHandler { + create( + body: NodeJS.ReadableStream | undefined, + options: DfsModels.PathCreateOptionalParams, + context: Context + ): Promise; + + delete( + options: DfsModels.PathDeleteOptionalParams, + context: Context + ): Promise; + + getProperties( + options: DfsModels.PathGetPropertiesOptionalParams, + context: Context + ): Promise; + + read( + options: DfsModels.PathReadOptionalParams, + context: Context + ): Promise; + + update( + body: NodeJS.ReadableStream | undefined, + options: DfsModels.PathUpdateOptionalParams, + context: Context + ): Promise; + + lease( + options: DfsModels.PathLeaseOptionalParams, + context: Context + ): Promise; +} diff --git a/src/blob/generated-dfs/handlers/handlerMappers.ts b/src/blob/generated-dfs/handlers/handlerMappers.ts new file mode 100644 index 000000000..13783c249 --- /dev/null +++ b/src/blob/generated-dfs/handlers/handlerMappers.ts @@ -0,0 +1,86 @@ +/* + * Copyright (c) Microsoft Corporation. All rights reserved. + * Licensed under the MIT License. See License.txt in the project root for license information. + * + * Code generated by Microsoft (R) AutoRest Code Generator. + * Changes may cause incorrect behavior and will be lost if the code is regenerated. 
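+ *
+ * Example: getHandlerByOperation(DfsOperation.Path_Create) returns
+ * { handler: "pathHandler", method: "create", arguments: ["body", "options"] },
+ * which the dispatch layer is expected to use as
+ * handlers.pathHandler.create(body, options, context).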
+ */ + +import { DfsOperation } from "../artifacts/operation"; + +export interface IHandlerPath { + handler: string; + method: string; + arguments: string[]; +} + +const operationHandlerMapping: { [key: number]: IHandlerPath } = {}; + +operationHandlerMapping[DfsOperation.Filesystem_Create] = { + arguments: ["options"], + handler: "filesystemHandler", + method: "create" +}; +operationHandlerMapping[DfsOperation.Filesystem_Delete] = { + arguments: ["options"], + handler: "filesystemHandler", + method: "delete" +}; +operationHandlerMapping[DfsOperation.Filesystem_GetProperties] = { + arguments: ["options"], + handler: "filesystemHandler", + method: "getProperties" +}; +operationHandlerMapping[DfsOperation.Filesystem_SetProperties] = { + arguments: ["options"], + handler: "filesystemHandler", + method: "setProperties" +}; +operationHandlerMapping[DfsOperation.Filesystem_List] = { + arguments: ["options"], + handler: "filesystemHandler", + method: "list" +}; +operationHandlerMapping[DfsOperation.Filesystem_ListPaths] = { + arguments: ["options"], + handler: "filesystemHandler", + method: "listPaths" +}; +operationHandlerMapping[DfsOperation.Path_Create] = { + arguments: ["body", "options"], + handler: "pathHandler", + method: "create" +}; +operationHandlerMapping[DfsOperation.Path_Delete] = { + arguments: ["options"], + handler: "pathHandler", + method: "delete" +}; +operationHandlerMapping[DfsOperation.Path_GetProperties] = { + arguments: ["options"], + handler: "pathHandler", + method: "getProperties" +}; +operationHandlerMapping[DfsOperation.Path_Read] = { + arguments: ["options"], + handler: "pathHandler", + method: "read" +}; +operationHandlerMapping[DfsOperation.Path_Update] = { + arguments: ["body", "options"], + handler: "pathHandler", + method: "update" +}; +operationHandlerMapping[DfsOperation.Path_Lease] = { + arguments: ["options"], + handler: "pathHandler", + method: "lease" +}; + +export function getHandlerByOperation( + operation: DfsOperation +): IHandlerPath | undefined { + return operationHandlerMapping[operation]; +} + +export default operationHandlerMapping; diff --git a/src/blob/generated/artifacts/models.ts b/src/blob/generated/artifacts/models.ts index e7f65d7a3..d5e275559 100644 --- a/src/blob/generated/artifacts/models.ts +++ b/src/blob/generated/artifacts/models.ts @@ -4318,6 +4318,10 @@ export interface ContainerGetAccountInfoHeaders { * 'FileStorage', 'BlockBlobStorage' */ accountKind?: AccountKind; + /** + * Version 2019-07-07 and newer. Indicates if the account has a hierarchical namespace enabled. + */ + isHierarchicalNamespaceEnabled?: boolean; errorCode?: string; } @@ -8695,6 +8699,10 @@ export type BlobGetAccountInfoResponse = BlobGetAccountInfoHeaders & { * The response status code. */ statusCode: 200; + /** + * Version 2019-07-07 and newer. Indicates if the account has a hierarchical namespace enabled. + */ + isHierarchicalNamespaceEnabled?: boolean; }; /** @@ -8705,6 +8713,10 @@ export type BlobGetAccountInfoWithHeadResponse = BlobGetAccountInfoWithHeadHeade * The response status code. */ statusCode: 200; + /** + * Version 2019-07-07 and newer. Indicates if the account has a hierarchical namespace enabled. 
+ */ + isHierarchicalNamespaceEnabled?: boolean; }; /** diff --git a/src/blob/handlers/BlobBatchHandler.ts b/src/blob/handlers/BlobBatchHandler.ts index 98cdabfba..f2edb4ebb 100644 --- a/src/blob/handlers/BlobBatchHandler.ts +++ b/src/blob/handlers/BlobBatchHandler.ts @@ -57,7 +57,8 @@ export class BlobBatchHandler { private readonly extentStore: IExtentStore, private readonly logger: ILogger, private readonly loose: boolean, - private readonly disableProductStyle?: boolean + private readonly disableProductStyle?: boolean, + private readonly enableHierarchicalNamespace: boolean = false ) { const subRequestContextMiddleware = (req: IRequest, res: IResponse, locals: any, next: SubRequestNextFunction) => { const urlbuilder = URLBuilder.parse(req.getUrl()); @@ -165,7 +166,9 @@ export class BlobBatchHandler { this.metadataStore, this.extentStore, this.logger, - this.loose + this.loose, + this.disableProductStyle, + this.enableHierarchicalNamespace ), pageBlobHandler: new PageBlobHandler( this.metadataStore, @@ -180,7 +183,9 @@ export class BlobBatchHandler { this.metadataStore, this.extentStore, this.logger, - this.loose + this.loose, + this.disableProductStyle, + this.enableHierarchicalNamespace ) }; diff --git a/src/blob/handlers/BlobHandler.ts b/src/blob/handlers/BlobHandler.ts index 0ab7be045..3d88a226a 100644 --- a/src/blob/handlers/BlobHandler.ts +++ b/src/blob/handlers/BlobHandler.ts @@ -48,7 +48,8 @@ export default class BlobHandler extends BaseHandler implements IBlobHandler { extentStore: IExtentStore, logger: ILogger, loose: boolean, - private readonly rangesManager: IPageBlobRangesManager + private readonly rangesManager: IPageBlobRangesManager, + private readonly enableHierarchicalNamespace: boolean = false ) { super(metadataStore, extentStore, logger, loose); } @@ -960,6 +961,21 @@ export default class BlobHandler extends BaseHandler implements IBlobHandler { public async getAccountInfo( context: Context ): Promise { + // Retrieve HNS flag from container metadata + const blobCtx = new BlobStorageContext(context); + const accountName = blobCtx.account!; + const containerName = blobCtx.container!; + let hns = this.enableHierarchicalNamespace; + try { + const containerProps = await this.metadataStore.getContainerProperties( + context, accountName, containerName + ); + hns = containerProps.metadata?.["azurite_hns_enabled"] === "true" || + (containerProps.metadata?.["azurite_hns_enabled"] === undefined && this.enableHierarchicalNamespace); + } catch (error: any) { + if (error.statusCode !== 404) throw error; + // container not found — fall back to server-wide default + } const response: Models.BlobGetAccountInfoResponse = { statusCode: 200, requestId: context.contextId, @@ -967,6 +983,7 @@ export default class BlobHandler extends BaseHandler implements IBlobHandler { skuName: EMULATOR_ACCOUNT_SKUNAME, accountKind: EMULATOR_ACCOUNT_KIND, date: context.startTime!, + isHierarchicalNamespaceEnabled: hns, version: BLOB_API_VERSION }; return response; @@ -1023,14 +1040,14 @@ export default class BlobHandler extends BaseHandler implements IBlobHandler { // Start Range is bigger than blob length if (rangeStart > blob.properties.contentLength!) 
{ - throw StorageErrorFactory.getInvalidPageRange2(context.contextId!,`bytes */${blob.properties.contentLength}`); + throw StorageErrorFactory.getInvalidPageRange2(context.contextId!, `bytes */${blob.properties.contentLength}`); } // Will automatically shift request with longer data end than blob size to blob size if (rangeEnd + 1 >= blob.properties.contentLength!) { // report error is blob size is 0, and rangeEnd is specified but not 0 if (blob.properties.contentLength == 0 && rangeEnd !== 0 && rangeEnd !== Infinity) { - throw StorageErrorFactory.getInvalidPageRange2(context.contextId!,`bytes */${blob.properties.contentLength}`); + throw StorageErrorFactory.getInvalidPageRange2(context.contextId!, `bytes */${blob.properties.contentLength}`); } else { rangeEnd = blob.properties.contentLength! - 1; @@ -1111,7 +1128,7 @@ export default class BlobHandler extends BaseHandler implements IBlobHandler { acceptRanges: "bytes", contentLength, contentRange, - contentMD5: contentRange ? (context.request!.getHeader("x-ms-range-get-content-md5") ? contentMD5: undefined) : contentMD5, + contentMD5: contentRange ? (context.request!.getHeader("x-ms-range-get-content-md5") ? contentMD5 : undefined) : contentMD5, tagCount: getBlobTagsCount(blob.blobTags), isServerEncrypted: true, clientRequestId: options.requestId, @@ -1151,14 +1168,14 @@ export default class BlobHandler extends BaseHandler implements IBlobHandler { // Start Range is bigger than blob length if (rangeStart > blob.properties.contentLength!) { - throw StorageErrorFactory.getInvalidPageRange2(context.contextId!,`bytes */${blob.properties.contentLength}`); + throw StorageErrorFactory.getInvalidPageRange2(context.contextId!, `bytes */${blob.properties.contentLength}`); } // Will automatically shift request with longer data end than blob size to blob size if (rangeEnd + 1 >= blob.properties.contentLength!) { // report error is blob size is 0, and rangeEnd is specified but not 0 if (blob.properties.contentLength == 0 && rangeEnd !== 0 && rangeEnd !== Infinity) { - throw StorageErrorFactory.getInvalidPageRange2(context.contextId!,`bytes */${blob.properties.contentLength}`); + throw StorageErrorFactory.getInvalidPageRange2(context.contextId!, `bytes */${blob.properties.contentLength}`); } else { rangeEnd = blob.properties.contentLength! - 1; @@ -1247,7 +1264,7 @@ export default class BlobHandler extends BaseHandler implements IBlobHandler { contentType: context.request!.getQuery("rsct") ?? blob.properties.contentType, contentLength, contentRange, - contentMD5: contentRange ? (context.request!.getHeader("x-ms-range-get-content-md5") ? contentMD5: undefined) : contentMD5, + contentMD5: contentRange ? (context.request!.getHeader("x-ms-range-get-content-md5") ? 
contentMD5 : undefined) : contentMD5, blobContentMD5: blob.properties.contentMD5, tagCount: getBlobTagsCount(blob.blobTags), isServerEncrypted: true, @@ -1337,8 +1354,7 @@ export default class BlobHandler extends BaseHandler implements IBlobHandler { try { return new URL(copySource) } - catch - { + catch { throw StorageErrorFactory.getInvalidHeaderValue( context.contextId, { diff --git a/src/blob/handlers/ContainerHandler.ts b/src/blob/handlers/ContainerHandler.ts index 66c40af6d..c2fda589a 100644 --- a/src/blob/handlers/ContainerHandler.ts +++ b/src/blob/handlers/ContainerHandler.ts @@ -38,7 +38,8 @@ export default class ContainerHandler extends BaseHandler extentStore: IExtentStore, logger: ILogger, loose: boolean, - disableProductStyle?: boolean + disableProductStyle?: boolean, + private readonly enableHierarchicalNamespace: boolean = false ) { super(metadataStore, extentStore, logger, loose); this.disableProductStyle = disableProductStyle; @@ -65,7 +66,12 @@ export default class ContainerHandler extends BaseHandler // Preserve metadata key case const metadata = convertRawHeadersToMetadata( blobCtx.request!.getRawHeaders(), context.contextId! - ); + ) ?? {}; + + // Determine HNS (Gen2) flag: explicit header overrides server default + const hnsHeader = blobCtx.request!.getHeader("x-ms-namespace-enabled"); + const hns = hnsHeader !== undefined ? hnsHeader === "true" : this.enableHierarchicalNamespace; + metadata["azurite_hns_enabled"] = hns ? "true" : "false"; await this.metadataStore.createContainer(context, { accountName, @@ -117,6 +123,13 @@ export default class ContainerHandler extends BaseHandler options.leaseAccessConditions ); + // Strip internal reserved key from user-visible metadata + const visibleMetadata = containerProperties.metadata + ? Object.fromEntries( + Object.entries(containerProperties.metadata).filter(([k]) => k !== "azurite_hns_enabled") + ) + : containerProperties.metadata; + const response: Models.ContainerGetPropertiesResponse = { statusCode: 200, requestId: context.contextId, @@ -124,7 +137,7 @@ export default class ContainerHandler extends BaseHandler eTag: containerProperties.properties.etag, ...containerProperties.properties, blobPublicAccess: containerProperties.properties.publicAccess, - metadata: containerProperties.metadata, + metadata: Object.keys(visibleMetadata ?? {}).length > 0 ? visibleMetadata : undefined, version: BLOB_API_VERSION }; @@ -203,10 +216,25 @@ export default class ContainerHandler extends BaseHandler const date = blobCtx.startTime!; const eTag = newEtag(); - // Preserve metadata key case - const metadata = convertRawHeadersToMetadata( + // Preserve metadata key case; strip client-supplied azurite_hns_enabled + const rawMetadata = convertRawHeadersToMetadata( blobCtx.request!.getRawHeaders(), context.contextId! ); + const userMetadata: { [key: string]: string } = {}; + if (rawMetadata) { + for (const [k, v] of Object.entries(rawMetadata)) { + if (k !== "azurite_hns_enabled") userMetadata[k] = v; + } + } + + // Preserve the per-container HNS flag from existing metadata + const existingProps = await this.metadataStore.getContainerProperties( + context, accountName, containerName + ); + const hnsValue = existingProps.metadata?.["azurite_hns_enabled"]; + const metadata = Object.keys(userMetadata).length > 0 || hnsValue !== undefined + ? { ...userMetadata, ...(hnsValue !== undefined ? 
{ azurite_hns_enabled: hnsValue } : {}) } + : undefined; await this.metadataStore.setContainerMetadata( context, @@ -341,7 +369,8 @@ export default class ContainerHandler extends BaseHandler const requestBatchBoundary = blobServiceCtx.request!.getHeader("content-type")!.split("=")[1]; const blobBatchHandler = new BlobBatchHandler(this.accountDataStore, this.oauth, - this.metadataStore, this.extentStore, this.logger, this.loose, this.disableProductStyle); + this.metadataStore, this.extentStore, this.logger, this.loose, this.disableProductStyle, + this.enableHierarchicalNamespace); const responseBodyString = await blobBatchHandler.submitBatch(body, requestBatchBoundary, @@ -838,6 +867,21 @@ export default class ContainerHandler extends BaseHandler public async getAccountInfo( context: Context ): Promise { + // Retrieve HNS flag from container metadata + const blobCtx = new BlobStorageContext(context); + const accountName = blobCtx.account!; + const containerName = blobCtx.container!; + let hns = this.enableHierarchicalNamespace; + try { + const containerProps = await this.metadataStore.getContainerProperties( + context, accountName, containerName + ); + hns = containerProps.metadata?.["azurite_hns_enabled"] === "true" || + (containerProps.metadata?.["azurite_hns_enabled"] === undefined && this.enableHierarchicalNamespace); + } catch (error: any) { + if (error.statusCode !== 404) throw error; + // container not found — fall back to server-wide default + } const response: Models.ContainerGetAccountInfoResponse = { statusCode: 200, requestId: context.contextId, @@ -845,6 +889,7 @@ export default class ContainerHandler extends BaseHandler skuName: EMULATOR_ACCOUNT_SKUNAME, accountKind: EMULATOR_ACCOUNT_KIND, date: context.startTime!, + isHierarchicalNamespaceEnabled: hns, version: BLOB_API_VERSION }; return response; diff --git a/src/blob/handlers/ServiceHandler.ts b/src/blob/handlers/ServiceHandler.ts index c7cb5010e..d3c8f0123 100644 --- a/src/blob/handlers/ServiceHandler.ts +++ b/src/blob/handlers/ServiceHandler.ts @@ -8,7 +8,6 @@ import { BLOB_API_VERSION, DEFAULT_LIST_BLOBS_MAX_RESULTS, DEFAULT_LIST_CONTAINERS_MAX_RESULTS, - EMULATOR_ACCOUNT_ISHIERARCHICALNAMESPACEENABLED, EMULATOR_ACCOUNT_KIND, EMULATOR_ACCOUNT_SKUNAME, HeaderConstants, @@ -43,7 +42,8 @@ export default class ServiceHandler extends BaseHandler extentStore: IExtentStore, logger: ILogger, loose: boolean, - disableProductStyle?: boolean + disableProductStyle?: boolean, + private readonly enableHierarchicalNamespace: boolean = false ) { super(metadataStore, extentStore, logger, loose); this.disableProductStyle = disableProductStyle; @@ -129,7 +129,8 @@ export default class ServiceHandler extends BaseHandler const requestBatchBoundary = blobServiceCtx.request!.getHeader("content-type")!.split("=")[1]; const blobBatchHandler = new BlobBatchHandler(this.accountDataStore, this.oauth, - this.metadataStore, this.extentStore, this.logger, this.loose, this.disableProductStyle); + this.metadataStore, this.extentStore, this.logger, this.loose, this.disableProductStyle, + this.enableHierarchicalNamespace); const responseBodyString = await blobBatchHandler.submitBatch(body, requestBatchBoundary, @@ -361,7 +362,7 @@ export default class ServiceHandler extends BaseHandler skuName: EMULATOR_ACCOUNT_SKUNAME, accountKind: EMULATOR_ACCOUNT_KIND, date: context.startTime!, - isHierarchicalNamespaceEnabled: EMULATOR_ACCOUNT_ISHIERARCHICALNAMESPACEENABLED, + isHierarchicalNamespaceEnabled: this.enableHierarchicalNamespace, version: BLOB_API_VERSION 
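+      // The HNS flag now follows the --enableHierarchicalNamespace server option
+      // rather than the EMULATOR_ACCOUNT_ISHIERARCHICALNAMESPACEENABLED constant.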
}; return response; diff --git a/src/blob/main.ts b/src/blob/main.ts index 43d4d3ce8..a68941405 100644 --- a/src/blob/main.ts +++ b/src/blob/main.ts @@ -1,15 +1,13 @@ #!/usr/bin/env node import * as Logger from "../common/Logger"; import { BlobServerFactory } from "./BlobServerFactory"; -import SqlBlobServer from "./SqlBlobServer"; -import BlobServer from "./BlobServer"; import { setExtentMemoryLimit } from "../common/ConfigurationBase"; import BlobEnvironment from "./BlobEnvironment"; import { AzuriteTelemetryClient } from "../common/Telemetry"; // tslint:disable:no-console -function shutdown(server: BlobServer | SqlBlobServer) { +function shutdown(server: { close: () => Promise }) { const beforeCloseMessage = `Azurite Blob service is closing...`; const afterCloseMessage = `Azurite Blob service successfully closed`; AzuriteTelemetryClient.TraceStopEvent("Blob"); @@ -24,33 +22,25 @@ function shutdown(server: BlobServer | SqlBlobServer) { * Entry for Azurite blob service. */ async function main() { + const env = new BlobEnvironment(); + const blobServerFactory = new BlobServerFactory(); const server = await blobServerFactory.createServer(); const config = server.config; - // We use logger singleton as global debugger logger to track detailed outputs cross layers - // Note that, debug log is different from access log which is only available in request handler layer to - // track every request. Access log is not singleton, and initialized in specific RequestHandlerFactory implementations - // Enable debug log by default before first release for debugging purpose Logger.configLogger(config.enableDebugLog, config.debugLogFilePath); - let env = new BlobEnvironment(); setExtentMemoryLimit(env, true); - // Start server - console.log( - `Azurite Blob service is starting on ${config.host}:${config.port}` - ); + console.log(`Azurite Blob service is starting on ${config.host}:${config.port}`); await server.start(); - console.log( - `Azurite Blob service successfully listens on ${server.getHttpServerAddress()}` - ); - + console.log(`Azurite Blob service successfully listens on ${server.getHttpServerAddress()}`); + console.log(`Azurite DFS service is available on the same port as the Blob service.`); + const location = await env.location(); - AzuriteTelemetryClient.init(location, !env.disableTelemetry(), env); + AzuriteTelemetryClient.init(location, !env.disableTelemetry(), env); await AzuriteTelemetryClient.TraceStartEvent("Blob"); - // Handle close event process .once("message", (msg) => { if (msg === "shutdown") { diff --git a/src/blob/persistence/IBlobMetadataStore.ts b/src/blob/persistence/IBlobMetadataStore.ts index fb933f8df..63ebfbf77 100644 --- a/src/blob/persistence/IBlobMetadataStore.ts +++ b/src/blob/persistence/IBlobMetadataStore.ts @@ -1160,6 +1160,73 @@ export interface IBlobMetadataStore options: Models.AppendBlobSealOptionalParams, ): Promise; + /** + * Atomically rename a single blob (metadata-only, no extent copy). + * + * @param {Context} context + * @param {string} account + * @param {string} sourceContainer + * @param {string} sourceBlob + * @param {string} destContainer + * @param {string} destBlob + * @returns {Promise} + * @memberof IBlobMetadataStore + */ + /** + * Atomically rename a path (file or directory) and its HNS hierarchy entries + * in a single operation. For directories, all child blobs are renamed too. + * Implementations must ensure all mutations are committed together with no + * observable intermediate state. 
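+   *
+   * Example: renaming directory "a/b" to "x/y" must move the blob "a/b" and every
+   * blob under "a/b/" (e.g. "a/b/c.txt" becomes "x/y/c.txt"), rewrite the matching
+   * HNS hierarchy rows, and expose either the fully renamed tree or the original
+   * one to concurrent readers, never a mixture.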
+ */ + renamePathAtomic( + context: Context, + account: string, + sourceContainer: string, + sourcePath: string, + destContainer: string, + destPath: string, + isDirectory: boolean + ): Promise; + + // --------------------------------------------------------------------------- + // HNS (Hierarchical Namespace) parent-child hierarchy methods + // --------------------------------------------------------------------------- + + /** + * Register a path in the HNS hierarchy table. + * Called when creating a file or directory via DFS. + */ + registerHnsPath( + context: Context, + account: string, + container: string, + path: string, + parentPath: string | null, + isDirectory: boolean + ): Promise; + + /** + * Unregister a path from the HNS hierarchy table. + * Called when deleting a file or directory via DFS. + */ + unregisterHnsPath( + context: Context, + account: string, + container: string, + path: string + ): Promise; + + /** + * Unregister all paths under a prefix from the HNS hierarchy table. + * Called when recursively deleting a directory via DFS. + */ + unregisterHnsPathsByPrefix( + context: Context, + account: string, + container: string, + prefix: string + ): Promise; + } export default IBlobMetadataStore; diff --git a/src/blob/persistence/LokiBlobMetadataStore.ts b/src/blob/persistence/LokiBlobMetadataStore.ts index 65a953ebe..45a80e449 100644 --- a/src/blob/persistence/LokiBlobMetadataStore.ts +++ b/src/blob/persistence/LokiBlobMetadataStore.ts @@ -105,6 +105,7 @@ export default class LokiBlobMetadataStore private readonly CONTAINERS_COLLECTION = "$CONTAINERS_COLLECTION$"; private readonly BLOBS_COLLECTION = "$BLOBS_COLLECTION$"; private readonly BLOCKS_COLLECTION = "$BLOCKS_COLLECTION$"; + private readonly HNS_HIERARCHY_COLLECTION = "$HNS_HIERARCHY$"; private readonly pageBlobRangesManager = new PageBlobRangesManager(); @@ -177,6 +178,13 @@ export default class LokiBlobMetadataStore }); } + // Create HNS hierarchy collection if not exists (parent-child relationships) + if (this.db.getCollection(this.HNS_HIERARCHY_COLLECTION) === null) { + this.db.addCollection(this.HNS_HIERARCHY_COLLECTION, { + indices: ["accountName", "containerName", "path", "parentPath"] + }); + } + await new Promise((resolve, reject) => { this.db.saveDatabase((err) => { if (err) { @@ -484,6 +492,11 @@ export default class LokiBlobMetadataStore accountName: account, containerName: container }); + + const hnsColl = this.db.getCollection(this.HNS_HIERARCHY_COLLECTION); + if (hnsColl) { + hnsColl.findAndRemove({ accountName: account, containerName: container }); + } } /** @@ -3562,4 +3575,166 @@ export default class LokiBlobMetadataStore return doc.properties; } + + public async renamePathAtomic( + context: Context, + account: string, + sourceContainer: string, + sourcePath: string, + destContainer: string, + destPath: string, + isDirectory: boolean + ): Promise { + // All LokiJS operations below are synchronous — no intermediate awaits — + // so the event loop never yields and no concurrent request can observe + // a partial rename state. 
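+    // (If an await were ever introduced between the mutations below, for example
+    // to persist each document individually, that guarantee would be lost; keep
+    // the body synchronous.)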
+ const blobsColl = this.db.getCollection(this.BLOBS_COLLECTION); + const hnsColl = this.db.getCollection(this.HNS_HIERARCHY_COLLECTION); + const now = context.startTime!; + const etag = newEtag(); + + if (isDirectory) { + const sourcePrefix = sourcePath + "/"; + const destPrefix = destPath + "/"; + const children = blobsColl.find({ + accountName: account, + containerName: sourceContainer, + name: { $regex: new RegExp(`^${this.escapeRegExp(sourcePrefix)}`) } + }); + for (const child of children) { + child.containerName = destContainer; + child.name = destPrefix + child.name.substring(sourcePrefix.length); + child.properties.lastModified = now; + child.properties.etag = newEtag(); + blobsColl.update(child); + } + } + + const doc = blobsColl.findOne({ + accountName: account, + containerName: sourceContainer, + name: sourcePath, + snapshot: "" + }); + if (!doc) { + throw StorageErrorFactory.getBlobNotFound(context.contextId); + } + doc.containerName = destContainer; + doc.name = destPath; + doc.properties.lastModified = now; + doc.properties.etag = etag; + blobsColl.update(doc); + + // Re-key any uncommitted blocks staged under the old path + const blockColl = this.db.getCollection(this.BLOCKS_COLLECTION); + const stagedBlocks = blockColl.find({ + accountName: account, + containerName: sourceContainer, + blobName: sourcePath + }); + for (const blk of stagedBlocks) { + blk.containerName = destContainer; + blk.blobName = destPath; + blockColl.update(blk); + } + + const hnsDoc = hnsColl.findOne({ + accountName: account, + containerName: sourceContainer, + path: sourcePath + }); + if (hnsDoc) { + hnsDoc.containerName = destContainer; + hnsDoc.path = destPath; + hnsDoc.parentPath = destPath.includes("/") + ? destPath.substring(0, destPath.lastIndexOf("/")) + : null; + hnsColl.update(hnsDoc); + } + + const hnsSourcePrefix = sourcePath + "/"; + const hnsDestPrefix = destPath + "/"; + const hnsChildren = hnsColl.find({ + accountName: account, + containerName: sourceContainer, + path: { $regex: new RegExp(`^${this.escapeRegExp(hnsSourcePrefix)}`) } + }); + for (const child of hnsChildren) { + const relativePath = child.path.substring(hnsSourcePrefix.length); + child.containerName = destContainer; + child.path = hnsDestPrefix + relativePath; + if (child.parentPath && child.parentPath.startsWith(sourcePath)) { + child.parentPath = destPath + child.parentPath.substring(sourcePath.length); + } + hnsColl.update(child); + } + + return doc.properties; + } + + private escapeRegExp(str: string): string { + return str.replace(/[.*+?^${}()|[\]\\]/g, "\\$&"); + } + + // --------------------------------------------------------------------------- + // HNS hierarchy methods + // --------------------------------------------------------------------------- + + public async registerHnsPath( + _context: Context, + account: string, + container: string, + path: string, + parentPath: string | null, + isDirectory: boolean + ): Promise { + const coll = this.db.getCollection(this.HNS_HIERARCHY_COLLECTION); + const existing = coll.findOne({ + accountName: account, + containerName: container, + path + }); + if (existing) { + existing.parentPath = parentPath; + existing.isDirectory = isDirectory; + coll.update(existing); + } else { + coll.insert({ + accountName: account, + containerName: container, + path, + parentPath, + isDirectory + }); + } + } + + public async unregisterHnsPath( + _context: Context, + account: string, + container: string, + path: string + ): Promise { + const coll = 
this.db.getCollection(this.HNS_HIERARCHY_COLLECTION); + coll.findAndRemove({ + accountName: account, + containerName: container, + path + }); + } + + public async unregisterHnsPathsByPrefix( + _context: Context, + account: string, + container: string, + prefix: string + ): Promise { + const coll = this.db.getCollection(this.HNS_HIERARCHY_COLLECTION); + coll.findAndRemove({ + accountName: account, + containerName: container, + path: { $regex: new RegExp(`^${this.escapeRegExp(prefix)}`) } + }); + } + } diff --git a/src/blob/persistence/SqlBlobMetadataStore.ts b/src/blob/persistence/SqlBlobMetadataStore.ts index 73c2cf9c3..9f58a0f07 100644 --- a/src/blob/persistence/SqlBlobMetadataStore.ts +++ b/src/blob/persistence/SqlBlobMetadataStore.ts @@ -79,6 +79,7 @@ class ServicesModel extends Model { } class ContainersModel extends Model { } class BlobsModel extends Model { } class BlocksModel extends Model { } +class HnsHierarchyModel extends Model { } // class PagesModel extends Model {} interface IBlobContentProperties { @@ -370,6 +371,53 @@ export default class SqlBlobMetadataStore implements IBlobMetadataStore { } ); + // HNS hierarchy table: parent-child relationships for hierarchical namespace + HnsHierarchyModel.init( + { + id: { + type: INTEGER.UNSIGNED, + primaryKey: true, + autoIncrement: true + }, + accountName: { + type: "VARCHAR(64)", + allowNull: false + }, + containerName: { + type: "VARCHAR(255)", + allowNull: false + }, + path: { + type: "VARCHAR(1024)", + allowNull: false + }, + parentPath: { + type: "VARCHAR(1024)", + allowNull: true + }, + isDirectory: { + type: BOOLEAN, + allowNull: false, + defaultValue: false + } + }, + { + sequelize: this.sequelize, + modelName: "HnsHierarchy", + tableName: "HnsHierarchy", + timestamps: false, + indexes: [ + { + unique: true, + fields: ["accountName", "containerName", "path"] + }, + { + fields: ["accountName", "containerName", "parentPath"] + } + ] + } + ); + // TODO: sync() is only for development purpose, use migration for production await this.sequelize.sync(); @@ -660,6 +708,11 @@ export default class SqlBlobMetadataStore implements IBlobMetadataStore { }, t ); + + await HnsHierarchyModel.destroy({ + where: { accountName: account, containerName: container }, + transaction: t + }); }); } @@ -3507,6 +3560,63 @@ export default class SqlBlobMetadataStore implements IBlobMetadataStore { * @returns {Promise} * @memberof SqlBlobMetadataStore */ + /** Escape SQL LIKE wildcards in a user-controlled path string. */ + private escapeLike(path: string): string { + return path.replace(/%/g, "\\%").replace(/_/g, "\\_"); + } + + /** + * Returns a SQL literal that computes destPrefix + column[sourcePrefix.length+1:]. + * Handles dialect differences: || vs CONCAT, SUBSTR vs SUBSTRING, identifier quoting. 
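+   *
+   * Example (sqlite/postgres): prefixReplaceExpr("blobName", "dir1/", "dir2/")
+   * yields the literal 'dir2/' || SUBSTR("blobName", 6), i.e. the first five
+   * characters ("dir1/") are dropped and the new prefix is prepended.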
+ */ + private prefixReplaceExpr(column: string, sourcePrefix: string, destPrefix: string): ReturnType { + const escapedDest = this.sequelize.escape(destPrefix); + const startIdx = sourcePrefix.length + 1; + const dialect = this.sequelize.getDialect(); + let expr: string; + switch (dialect) { + case "mssql": + expr = `${escapedDest} + SUBSTRING([${column}], ${startIdx}, LEN([${column}]))`; + break; + case "mysql": + case "mariadb": + expr = `CONCAT(${escapedDest}, SUBSTR(\`${column}\`, ${startIdx}))`; + break; + default: // sqlite, postgres + expr = `${escapedDest} || SUBSTR("${column}", ${startIdx})`; + } + return literal(expr); + } + + /** + * Returns a SQL literal: CASE WHEN column LIKE 'sourcePath%' + * THEN destPath + column[sourcePath.length+1:] ELSE column END + * Used to rewrite parentPath entries in the HNS hierarchy table. + */ + private conditionalPrefixReplaceExpr(column: string, sourcePath: string, destPath: string): ReturnType { + const escapedLike = this.sequelize.escape(sourcePath + "%"); + const escapedDest = this.sequelize.escape(destPath); + const startIdx = sourcePath.length + 1; + const dialect = this.sequelize.getDialect(); + let thenExpr: string; + let quotedCol: string; + switch (dialect) { + case "mssql": + quotedCol = `[${column}]`; + thenExpr = `${escapedDest} + SUBSTRING(${quotedCol}, ${startIdx}, LEN(${quotedCol}))`; + break; + case "mysql": + case "mariadb": + quotedCol = `\`${column}\``; + thenExpr = `CONCAT(${escapedDest}, SUBSTR(${quotedCol}, ${startIdx}))`; + break; + default: // sqlite, postgres + quotedCol = `"${column}"`; + thenExpr = `${escapedDest} || SUBSTR(${quotedCol}, ${startIdx})`; + } + return literal(`CASE WHEN ${quotedCol} LIKE ${escapedLike} THEN ${thenExpr} ELSE ${quotedCol} END`); + } + private async deleteBlobFromSQL(where: WhereOptions, t?: Transaction): Promise { await BlobsModel.destroy({ where, @@ -3576,4 +3686,156 @@ export default class SqlBlobMetadataStore implements IBlobMetadataStore { ): Promise { throw new NotImplementedinSQLError(context.contextId); } + + public async renamePathAtomic( + context: Context, + account: string, + sourceContainer: string, + sourcePath: string, + destContainer: string, + destPath: string, + isDirectory: boolean + ): Promise { + return this.sequelize.transaction(async (t) => { + const now = new Date(); + const etag = newEtag(); + + if (isDirectory) { + const sourcePrefix = sourcePath + "/"; + const destPrefix = destPath + "/"; + await BlobsModel.update( + { + containerName: destContainer, + blobName: this.prefixReplaceExpr("blobName", sourcePrefix, destPrefix), + lastModified: now, + etag: newEtag() + } as any, + { + where: { + accountName: account, + containerName: sourceContainer, + blobName: { [Op.like]: `${this.escapeLike(sourcePrefix)}%` } + }, + transaction: t + } + ); + } + + const [affectedCount] = await BlobsModel.update( + { containerName: destContainer, blobName: destPath, lastModified: now, etag }, + { + where: { + accountName: account, + containerName: sourceContainer, + blobName: sourcePath, + snapshot: "" + }, + transaction: t + } + ); + if (affectedCount === 0) { + throw StorageErrorFactory.getBlobNotFound(context.contextId); + } + + // Re-key uncommitted blocks staged under the old path + await BlocksModel.update( + { containerName: destContainer, blobName: destPath }, + { + where: { accountName: account, containerName: sourceContainer, blobName: sourcePath }, + transaction: t + } + ); + + await HnsHierarchyModel.update( + { + containerName: destContainer, + path: destPath, + 
parentPath: destPath.includes("/") + ? destPath.substring(0, destPath.lastIndexOf("/")) + : null + }, + { + where: { accountName: account, containerName: sourceContainer, path: sourcePath }, + transaction: t + } + ); + + const hnsSourcePrefix = sourcePath + "/"; + const hnsDestPrefix = destPath + "/"; + await HnsHierarchyModel.update( + { + containerName: destContainer, + path: this.prefixReplaceExpr("path", hnsSourcePrefix, hnsDestPrefix), + parentPath: this.conditionalPrefixReplaceExpr("parentPath", sourcePath, destPath) + } as any, + { + where: { + accountName: account, + containerName: sourceContainer, + path: { [Op.like]: `${this.escapeLike(hnsSourcePrefix)}%` } + }, + transaction: t + } + ); + + return { lastModified: now, etag } as Models.BlobPropertiesInternal; + }).catch((err: any) => { + if (err.name === "SequelizeUniqueConstraintError") { + throw StorageErrorFactory.getBlobAlreadyExists(context.contextId); + } + throw err; + }); + } + + // --------------------------------------------------------------------------- + // HNS hierarchy methods + // --------------------------------------------------------------------------- + + public async registerHnsPath( + _context: Context, + account: string, + container: string, + path: string, + parentPath: string | null, + isDirectory: boolean + ): Promise { + await HnsHierarchyModel.upsert({ + accountName: account, + containerName: container, + path, + parentPath, + isDirectory + }); + } + + public async unregisterHnsPath( + _context: Context, + account: string, + container: string, + path: string + ): Promise { + await HnsHierarchyModel.destroy({ + where: { + accountName: account, + containerName: container, + path + } + }); + } + + public async unregisterHnsPathsByPrefix( + _context: Context, + account: string, + container: string, + prefix: string + ): Promise { + await HnsHierarchyModel.destroy({ + where: { + accountName: account, + containerName: container, + path: { [Op.like]: `${this.escapeLike(prefix)}%` } + } + }); + } + } diff --git a/src/blob/utils/constants.ts b/src/blob/utils/constants.ts index c641b50de..1a5c9fa9e 100644 --- a/src/blob/utils/constants.ts +++ b/src/blob/utils/constants.ts @@ -30,7 +30,10 @@ export const EMULATOR_ACCOUNT_KEY = Buffer.from( export const EMULATOR_ACCOUNT_SKUNAME = Models.SkuName.StandardRAGRS; export const EMULATOR_ACCOUNT_KIND = Models.AccountKind.StorageV2; -export const EMULATOR_ACCOUNT_ISHIERARCHICALNAMESPACEENABLED = false; +export const EMULATOR_ACCOUNT_ISHIERARCHICALNAMESPACEENABLED_DEFAULT = true; +// Alias for backward compatibility — existing code imports this name +export const EMULATOR_ACCOUNT_ISHIERARCHICALNAMESPACEENABLED = + EMULATOR_ACCOUNT_ISHIERARCHICALNAMESPACEENABLED_DEFAULT; export const DEFAULT_BLOB_KEEP_ALIVE_TIMEOUT = 5; diff --git a/src/common/ConfigurationBase.ts b/src/common/ConfigurationBase.ts index be3f0928b..036437797 100644 --- a/src/common/ConfigurationBase.ts +++ b/src/common/ConfigurationBase.ts @@ -92,9 +92,13 @@ export default abstract class ConfigurationBase { public getOAuthLevel(): undefined | OAuthLevel { if (this.oauth) { - if (this.oauth.toLowerCase() === "basic") { + const level = this.oauth.toLowerCase(); + if (level === "basic") { return OAuthLevel.BASIC; } + if (level === "acl") { + return OAuthLevel.ACL; + } } return; diff --git a/src/common/Environment.ts b/src/common/Environment.ts index e42c4fddb..d35e2de1e 100644 --- a/src/common/Environment.ts +++ b/src/common/Environment.ts @@ -109,7 +109,11 @@ args ) .option( ["", "disableTelemetry"], - 
"Optional. Disable telemtry collection of Azurite. If not specify this parameter Azurite will collect telemetry data by default." + "Optional. Disable telemetry collection of Azurite. If not specify this parameter Azurite will collect telemetry data by default." + ) + .option( + ["", "enableHierarchicalNamespace"], + "Optional. Enable hierarchical namespace (HNS) mode for ADLS Gen2. Default is true." ); (args as any).config.name = "azurite"; @@ -222,6 +226,14 @@ export default class Environment implements IEnvironment { return this.flags.extentMemoryLimit; } + public enableHierarchicalNamespace(): boolean { + const val = this.flags.enableHierarchicalNamespace; + if (val !== undefined) { + return val !== false && val !== "false"; + } + return true; // default enabled + } + public disableTelemetry(): boolean { if (this.flags.disableTelemetry !== undefined) { return true; diff --git a/src/common/VSCEnvironment.ts b/src/common/VSCEnvironment.ts index 0bcff08f5..dcda58bf8 100644 --- a/src/common/VSCEnvironment.ts +++ b/src/common/VSCEnvironment.ts @@ -135,4 +135,10 @@ export default class VSCEnvironment implements IEnvironment { this.workspaceConfiguration.get("disableTelemetry") || false ); } + + public enableHierarchicalNamespace(): boolean { + return ( + this.workspaceConfiguration.get("enableHierarchicalNamespace") ?? true + ); + } } diff --git a/src/common/models.ts b/src/common/models.ts index e48961e35..2b3c15e8d 100644 --- a/src/common/models.ts +++ b/src/common/models.ts @@ -1,3 +1,4 @@ export enum OAuthLevel { - BASIC // Phase 1 + BASIC, // Phase 1: Token format/lifetime/issuer validation only + ACL // Phase 3: Token validation + ACL enforcement on DFS paths } diff --git a/swagger/dfs-storage-2023-11-03.json b/swagger/dfs-storage-2023-11-03.json new file mode 100644 index 000000000..a80377145 --- /dev/null +++ b/swagger/dfs-storage-2023-11-03.json @@ -0,0 +1,768 @@ +{ + "swagger": "2.0", + "info": { + "title": "Azure Data Lake Storage REST API", + "version": "2023-11-03", + "description": "Azure Data Lake Storage provides an ADLS Gen2 (DFS) REST API for hierarchical namespace operations." 
+ }, + "x-ms-parameterized-host": { + "hostTemplate": "{url}", + "useSchemePrefix": false, + "positionInOperation": "first", + "parameters": [ + { + "$ref": "#/parameters/Url" + } + ] + }, + "schemes": ["https"], + "consumes": ["application/json"], + "produces": ["application/json"], + "paths": {}, + "x-ms-paths": { + "/?resource=account": { + "get": { + "operationId": "Filesystem_List", + "summary": "List Filesystems", + "description": "List filesystems and their properties in given account.", + "parameters": [ + { "$ref": "#/parameters/Prefix" }, + { "$ref": "#/parameters/Continuation" }, + { "$ref": "#/parameters/MaxResults" }, + { "$ref": "#/parameters/ApiVersionParameter" }, + { "$ref": "#/parameters/ClientRequestId" } + ], + "responses": { + "200": { + "description": "OK", + "headers": { + "x-ms-continuation": { "type": "string" }, + "x-ms-request-id": { "type": "string" }, + "x-ms-version": { "type": "string" }, + "Date": { "type": "string", "format": "date-time-rfc1123" } + }, + "schema": { "$ref": "#/definitions/FilesystemList" } + }, + "default": { + "description": "Failure", + "schema": { "$ref": "#/definitions/StorageError" } + } + } + } + }, + "/{filesystem}?resource=filesystem": { + "put": { + "operationId": "Filesystem_Create", + "summary": "Create Filesystem", + "parameters": [ + { "$ref": "#/parameters/Filesystem" }, + { "$ref": "#/parameters/Properties" }, + { "$ref": "#/parameters/ApiVersionParameter" }, + { "$ref": "#/parameters/ClientRequestId" } + ], + "responses": { + "201": { + "description": "Created", + "headers": { + "ETag": { "type": "string" }, + "Last-Modified": { "type": "string", "format": "date-time-rfc1123" }, + "x-ms-namespace-enabled": { "type": "string" }, + "x-ms-request-id": { "type": "string" }, + "x-ms-version": { "type": "string" } + } + }, + "default": { + "description": "Failure", + "schema": { "$ref": "#/definitions/StorageError" } + } + } + }, + "delete": { + "operationId": "Filesystem_Delete", + "summary": "Delete Filesystem", + "parameters": [ + { "$ref": "#/parameters/Filesystem" }, + { "$ref": "#/parameters/IfModifiedSince" }, + { "$ref": "#/parameters/IfUnmodifiedSince" }, + { "$ref": "#/parameters/ApiVersionParameter" }, + { "$ref": "#/parameters/ClientRequestId" } + ], + "responses": { + "202": { + "description": "Accepted", + "headers": { + "x-ms-request-id": { "type": "string" }, + "x-ms-version": { "type": "string" } + } + }, + "default": { + "description": "Failure", + "schema": { "$ref": "#/definitions/StorageError" } + } + } + }, + "head": { + "operationId": "Filesystem_GetProperties", + "summary": "Get Filesystem Properties", + "parameters": [ + { "$ref": "#/parameters/Filesystem" }, + { "$ref": "#/parameters/ApiVersionParameter" }, + { "$ref": "#/parameters/ClientRequestId" } + ], + "responses": { + "200": { + "description": "OK", + "headers": { + "ETag": { "type": "string" }, + "Last-Modified": { "type": "string", "format": "date-time-rfc1123" }, + "x-ms-properties": { "type": "string" }, + "x-ms-namespace-enabled": { "type": "string" }, + "x-ms-request-id": { "type": "string" }, + "x-ms-version": { "type": "string" } + } + }, + "default": { + "description": "Failure", + "schema": { "$ref": "#/definitions/StorageError" } + } + } + }, + "patch": { + "operationId": "Filesystem_SetProperties", + "summary": "Set Filesystem Properties", + "parameters": [ + { "$ref": "#/parameters/Filesystem" }, + { "$ref": "#/parameters/Properties" }, + { "$ref": "#/parameters/IfModifiedSince" }, + { "$ref": "#/parameters/IfUnmodifiedSince" }, + { 
"$ref": "#/parameters/ApiVersionParameter" }, + { "$ref": "#/parameters/ClientRequestId" } + ], + "responses": { + "200": { + "description": "OK", + "headers": { + "ETag": { "type": "string" }, + "Last-Modified": { "type": "string", "format": "date-time-rfc1123" }, + "x-ms-request-id": { "type": "string" }, + "x-ms-version": { "type": "string" } + } + }, + "default": { + "description": "Failure", + "schema": { "$ref": "#/definitions/StorageError" } + } + } + }, + "get": { + "operationId": "Filesystem_ListPaths", + "summary": "List Paths", + "parameters": [ + { "$ref": "#/parameters/Filesystem" }, + { "$ref": "#/parameters/Directory" }, + { "$ref": "#/parameters/RecursiveRequired" }, + { "$ref": "#/parameters/Continuation" }, + { "$ref": "#/parameters/MaxResults" }, + { "$ref": "#/parameters/Upn" }, + { "$ref": "#/parameters/ApiVersionParameter" }, + { "$ref": "#/parameters/ClientRequestId" } + ], + "responses": { + "200": { + "description": "OK", + "headers": { + "x-ms-continuation": { "type": "string" }, + "x-ms-request-id": { "type": "string" }, + "x-ms-version": { "type": "string" } + }, + "schema": { "$ref": "#/definitions/PathList" } + }, + "default": { + "description": "Failure", + "schema": { "$ref": "#/definitions/StorageError" } + } + } + } + }, + "/{filesystem}/{path}": { + "put": { + "operationId": "Path_Create", + "summary": "Create or Rename Path", + "parameters": [ + { "$ref": "#/parameters/Filesystem" }, + { "$ref": "#/parameters/Path" }, + { "$ref": "#/parameters/Resource" }, + { "$ref": "#/parameters/Continuation" }, + { "$ref": "#/parameters/RenameSource" }, + { "$ref": "#/parameters/Properties" }, + { "$ref": "#/parameters/Permissions" }, + { "$ref": "#/parameters/Umask" }, + { "$ref": "#/parameters/IfMatch" }, + { "$ref": "#/parameters/IfNoneMatch" }, + { "$ref": "#/parameters/IfModifiedSince" }, + { "$ref": "#/parameters/IfUnmodifiedSince" }, + { "$ref": "#/parameters/SourceIfMatch" }, + { "$ref": "#/parameters/SourceIfNoneMatch" }, + { "$ref": "#/parameters/SourceIfModifiedSince" }, + { "$ref": "#/parameters/SourceIfUnmodifiedSince" }, + { "$ref": "#/parameters/ApiVersionParameter" }, + { "$ref": "#/parameters/ClientRequestId" } + ], + "responses": { + "201": { + "description": "Created", + "headers": { + "ETag": { "type": "string" }, + "Last-Modified": { "type": "string", "format": "date-time-rfc1123" }, + "x-ms-continuation": { "type": "string" }, + "Content-Length": { "type": "integer", "format": "int64" }, + "x-ms-request-id": { "type": "string" }, + "x-ms-version": { "type": "string" } + } + }, + "default": { + "description": "Failure", + "schema": { "$ref": "#/definitions/StorageError" } + } + } + }, + "head": { + "operationId": "Path_GetProperties", + "summary": "Get Properties / Get Access Control", + "parameters": [ + { "$ref": "#/parameters/Filesystem" }, + { "$ref": "#/parameters/Path" }, + { "$ref": "#/parameters/Action_GetAccessControl" }, + { "$ref": "#/parameters/Upn" }, + { "$ref": "#/parameters/LeaseIdOptional" }, + { "$ref": "#/parameters/IfMatch" }, + { "$ref": "#/parameters/IfNoneMatch" }, + { "$ref": "#/parameters/IfModifiedSince" }, + { "$ref": "#/parameters/IfUnmodifiedSince" }, + { "$ref": "#/parameters/ApiVersionParameter" }, + { "$ref": "#/parameters/ClientRequestId" } + ], + "responses": { + "200": { + "description": "OK", + "headers": { + "ETag": { "type": "string" }, + "Last-Modified": { "type": "string", "format": "date-time-rfc1123" }, + "x-ms-resource-type": { "type": "string" }, + "x-ms-properties": { "type": "string" }, + 
"x-ms-owner": { "type": "string" }, + "x-ms-group": { "type": "string" }, + "x-ms-permissions": { "type": "string" }, + "x-ms-acl": { "type": "string" }, + "x-ms-lease-duration": { "type": "string" }, + "x-ms-lease-state": { "type": "string" }, + "x-ms-lease-status": { "type": "string" }, + "Content-Length": { "type": "integer", "format": "int64" }, + "Content-Type": { "type": "string" }, + "x-ms-request-id": { "type": "string" }, + "x-ms-version": { "type": "string" } + } + }, + "default": { + "description": "Failure", + "schema": { "$ref": "#/definitions/StorageError" } + } + } + }, + "get": { + "operationId": "Path_Read", + "summary": "Read File", + "parameters": [ + { "$ref": "#/parameters/Filesystem" }, + { "$ref": "#/parameters/Path" }, + { "$ref": "#/parameters/Range" }, + { "$ref": "#/parameters/LeaseIdOptional" }, + { "$ref": "#/parameters/IfMatch" }, + { "$ref": "#/parameters/IfNoneMatch" }, + { "$ref": "#/parameters/IfModifiedSince" }, + { "$ref": "#/parameters/IfUnmodifiedSince" }, + { "$ref": "#/parameters/ApiVersionParameter" }, + { "$ref": "#/parameters/ClientRequestId" } + ], + "responses": { + "200": { + "description": "OK", + "headers": { + "Accept-Ranges": { "type": "string" }, + "Content-Range": { "type": "string" }, + "ETag": { "type": "string" }, + "Last-Modified": { "type": "string", "format": "date-time-rfc1123" }, + "x-ms-resource-type": { "type": "string" }, + "x-ms-properties": { "type": "string" }, + "x-ms-lease-duration": { "type": "string" }, + "x-ms-lease-state": { "type": "string" }, + "x-ms-lease-status": { "type": "string" }, + "Content-Length": { "type": "integer", "format": "int64" }, + "Content-Type": { "type": "string" }, + "x-ms-request-id": { "type": "string" }, + "x-ms-version": { "type": "string" } + }, + "schema": { "type": "object", "format": "file" } + }, + "default": { + "description": "Failure", + "schema": { "$ref": "#/definitions/StorageError" } + } + } + }, + "patch": { + "operationId": "Path_Update", + "summary": "Append/Flush/SetAccessControl", + "parameters": [ + { "$ref": "#/parameters/Filesystem" }, + { "$ref": "#/parameters/Path" }, + { "$ref": "#/parameters/Action_Required" }, + { "$ref": "#/parameters/Mode" }, + { "$ref": "#/parameters/Position" }, + { "$ref": "#/parameters/RetainUncommittedData" }, + { "$ref": "#/parameters/Close" }, + { "$ref": "#/parameters/ContentLength" }, + { "$ref": "#/parameters/ContentMD5" }, + { "$ref": "#/parameters/Properties" }, + { "$ref": "#/parameters/Owner" }, + { "$ref": "#/parameters/Group" }, + { "$ref": "#/parameters/PermissionsOptional" }, + { "$ref": "#/parameters/Acl" }, + { "$ref": "#/parameters/LeaseIdOptional" }, + { "$ref": "#/parameters/IfMatch" }, + { "$ref": "#/parameters/IfNoneMatch" }, + { "$ref": "#/parameters/IfModifiedSince" }, + { "$ref": "#/parameters/IfUnmodifiedSince" }, + { "$ref": "#/parameters/MaxRecords" }, + { "$ref": "#/parameters/Continuation" }, + { "$ref": "#/parameters/ApiVersionParameter" }, + { "$ref": "#/parameters/ClientRequestId" } + ], + "responses": { + "200": { + "description": "SetAccessControl/Flush success", + "headers": { + "ETag": { "type": "string" }, + "Last-Modified": { "type": "string", "format": "date-time-rfc1123" }, + "x-ms-request-id": { "type": "string" }, + "x-ms-version": { "type": "string" } + } + }, + "202": { + "description": "Append accepted", + "headers": { + "x-ms-request-id": { "type": "string" }, + "x-ms-version": { "type": "string" } + } + }, + "default": { + "description": "Failure", + "schema": { "$ref": "#/definitions/StorageError" 
} + } + } + }, + "delete": { + "operationId": "Path_Delete", + "summary": "Delete Path", + "parameters": [ + { "$ref": "#/parameters/Filesystem" }, + { "$ref": "#/parameters/Path" }, + { "$ref": "#/parameters/RecursiveOptional" }, + { "$ref": "#/parameters/Continuation" }, + { "$ref": "#/parameters/LeaseIdOptional" }, + { "$ref": "#/parameters/IfMatch" }, + { "$ref": "#/parameters/IfNoneMatch" }, + { "$ref": "#/parameters/IfModifiedSince" }, + { "$ref": "#/parameters/IfUnmodifiedSince" }, + { "$ref": "#/parameters/ApiVersionParameter" }, + { "$ref": "#/parameters/ClientRequestId" } + ], + "responses": { + "200": { + "description": "OK", + "headers": { + "x-ms-continuation": { "type": "string" }, + "x-ms-request-id": { "type": "string" }, + "x-ms-version": { "type": "string" } + } + }, + "default": { + "description": "Failure", + "schema": { "$ref": "#/definitions/StorageError" } + } + } + } + }, + "/{filesystem}/{path}?comp=lease": { + "post": { + "operationId": "Path_Lease", + "summary": "Lease Path", + "parameters": [ + { "$ref": "#/parameters/Filesystem" }, + { "$ref": "#/parameters/Path" }, + { "$ref": "#/parameters/LeaseAction" }, + { "$ref": "#/parameters/LeaseDuration" }, + { "$ref": "#/parameters/LeaseBreakPeriod" }, + { "$ref": "#/parameters/LeaseIdOptional" }, + { "$ref": "#/parameters/ProposedLeaseId" }, + { "$ref": "#/parameters/IfMatch" }, + { "$ref": "#/parameters/IfNoneMatch" }, + { "$ref": "#/parameters/IfModifiedSince" }, + { "$ref": "#/parameters/IfUnmodifiedSince" }, + { "$ref": "#/parameters/ApiVersionParameter" }, + { "$ref": "#/parameters/ClientRequestId" } + ], + "responses": { + "200": { + "description": "Lease renewed/changed/released", + "headers": { + "ETag": { "type": "string" }, + "Last-Modified": { "type": "string", "format": "date-time-rfc1123" }, + "x-ms-lease-id": { "type": "string" }, + "x-ms-request-id": { "type": "string" }, + "x-ms-version": { "type": "string" } + } + }, + "201": { + "description": "Lease acquired", + "headers": { + "ETag": { "type": "string" }, + "Last-Modified": { "type": "string", "format": "date-time-rfc1123" }, + "x-ms-lease-id": { "type": "string" }, + "x-ms-request-id": { "type": "string" }, + "x-ms-version": { "type": "string" } + } + }, + "202": { + "description": "Lease broken", + "headers": { + "ETag": { "type": "string" }, + "Last-Modified": { "type": "string", "format": "date-time-rfc1123" }, + "x-ms-lease-time": { "type": "integer" }, + "x-ms-request-id": { "type": "string" }, + "x-ms-version": { "type": "string" } + } + }, + "default": { + "description": "Failure", + "schema": { "$ref": "#/definitions/StorageError" } + } + } + } + } + }, + "definitions": { + "StorageError": { + "type": "object", + "properties": { + "error": { + "type": "object", + "properties": { + "code": { "type": "string" }, + "message": { "type": "string" } + } + } + } + }, + "FilesystemList": { + "type": "object", + "properties": { + "filesystems": { + "type": "array", + "items": { "$ref": "#/definitions/Filesystem" } + } + } + }, + "Filesystem": { + "type": "object", + "properties": { + "name": { "type": "string" }, + "lastModified": { "type": "string", "format": "date-time-rfc1123" }, + "eTag": { "type": "string" } + } + }, + "PathList": { + "type": "object", + "properties": { + "paths": { + "type": "array", + "items": { "$ref": "#/definitions/PathItem" } + } + } + }, + "PathItem": { + "type": "object", + "properties": { + "name": { "type": "string" }, + "isDirectory": { "type": "boolean" }, + "lastModified": { "type": "string", "format": 
"date-time-rfc1123" }, + "eTag": { "type": "string" }, + "contentLength": { "type": "integer", "format": "int64" }, + "owner": { "type": "string" }, + "group": { "type": "string" }, + "permissions": { "type": "string" } + } + }, + "SetAccessControlRecursiveResponse": { + "type": "object", + "properties": { + "directoriesSuccessful": { "type": "integer" }, + "filesSuccessful": { "type": "integer" }, + "failureCount": { "type": "integer" }, + "failedEntries": { + "type": "array", + "items": { "$ref": "#/definitions/AclFailedEntry" } + } + } + }, + "AclFailedEntry": { + "type": "object", + "properties": { + "name": { "type": "string" }, + "type": { "type": "string" }, + "errorMessage": { "type": "string" } + } + } + }, + "parameters": { + "Url": { + "name": "url", + "in": "path", + "required": true, + "type": "string", + "x-ms-skip-url-encoding": true + }, + "Filesystem": { + "name": "filesystem", + "in": "path", + "required": true, + "type": "string" + }, + "Path": { + "name": "path", + "in": "path", + "required": true, + "type": "string" + }, + "Resource": { + "name": "resource", + "in": "query", + "type": "string", + "enum": ["filesystem", "file", "directory"] + }, + "Directory": { + "name": "directory", + "in": "query", + "type": "string" + }, + "RecursiveRequired": { + "name": "recursive", + "in": "query", + "required": true, + "type": "boolean" + }, + "RecursiveOptional": { + "name": "recursive", + "in": "query", + "type": "boolean" + }, + "Continuation": { + "name": "continuation", + "in": "query", + "type": "string" + }, + "MaxResults": { + "name": "maxResults", + "in": "query", + "type": "integer", + "minimum": 1 + }, + "Prefix": { + "name": "prefix", + "in": "query", + "type": "string" + }, + "Upn": { + "name": "upn", + "in": "query", + "type": "boolean" + }, + "Action_Required": { + "name": "action", + "in": "query", + "required": true, + "type": "string", + "enum": ["append", "flush", "setAccessControl", "setAccessControlRecursive", "setProperties"] + }, + "Action_GetAccessControl": { + "name": "action", + "in": "query", + "type": "string", + "enum": ["getAccessControl", "getStatus"] + }, + "Mode": { + "name": "mode", + "in": "query", + "type": "string", + "enum": ["set", "modify", "remove"] + }, + "Position": { + "name": "position", + "in": "query", + "type": "integer", + "format": "int64" + }, + "RetainUncommittedData": { + "name": "retainUncommittedData", + "in": "query", + "type": "boolean" + }, + "Close": { + "name": "close", + "in": "query", + "type": "boolean" + }, + "MaxRecords": { + "name": "maxRecords", + "in": "query", + "type": "integer" + }, + "ContentLength": { + "name": "Content-Length", + "in": "header", + "type": "integer", + "format": "int64" + }, + "ContentMD5": { + "name": "Content-MD5", + "in": "header", + "type": "string" + }, + "Range": { + "name": "Range", + "in": "header", + "type": "string" + }, + "Properties": { + "name": "x-ms-properties", + "in": "header", + "type": "string" + }, + "Owner": { + "name": "x-ms-owner", + "in": "header", + "type": "string" + }, + "Group": { + "name": "x-ms-group", + "in": "header", + "type": "string" + }, + "Permissions": { + "name": "x-ms-permissions", + "in": "header", + "type": "string" + }, + "PermissionsOptional": { + "name": "x-ms-permissions", + "in": "header", + "type": "string" + }, + "Umask": { + "name": "x-ms-umask", + "in": "header", + "type": "string" + }, + "Acl": { + "name": "x-ms-acl", + "in": "header", + "type": "string" + }, + "RenameSource": { + "name": "x-ms-rename-source", + "in": "header", + "type": 
"string" + }, + "LeaseAction": { + "name": "x-ms-lease-action", + "in": "header", + "required": true, + "type": "string", + "enum": ["acquire", "release", "renew", "break", "change"] + }, + "LeaseDuration": { + "name": "x-ms-lease-duration", + "in": "header", + "type": "integer" + }, + "LeaseBreakPeriod": { + "name": "x-ms-lease-break-period", + "in": "header", + "type": "integer" + }, + "LeaseIdOptional": { + "name": "x-ms-lease-id", + "in": "header", + "type": "string" + }, + "ProposedLeaseId": { + "name": "x-ms-proposed-lease-id", + "in": "header", + "type": "string" + }, + "IfMatch": { + "name": "If-Match", + "in": "header", + "type": "string" + }, + "IfNoneMatch": { + "name": "If-None-Match", + "in": "header", + "type": "string" + }, + "IfModifiedSince": { + "name": "If-Modified-Since", + "in": "header", + "type": "string", + "format": "date-time-rfc1123" + }, + "IfUnmodifiedSince": { + "name": "If-Unmodified-Since", + "in": "header", + "type": "string", + "format": "date-time-rfc1123" + }, + "SourceIfMatch": { + "name": "x-ms-source-if-match", + "in": "header", + "type": "string" + }, + "SourceIfNoneMatch": { + "name": "x-ms-source-if-none-match", + "in": "header", + "type": "string" + }, + "SourceIfModifiedSince": { + "name": "x-ms-source-if-modified-since", + "in": "header", + "type": "string", + "format": "date-time-rfc1123" + }, + "SourceIfUnmodifiedSince": { + "name": "x-ms-source-if-unmodified-since", + "in": "header", + "type": "string", + "format": "date-time-rfc1123" + }, + "ApiVersionParameter": { + "name": "x-ms-version", + "in": "header", + "type": "string" + }, + "ClientRequestId": { + "name": "x-ms-client-request-id", + "in": "header", + "type": "string" + } + } +} diff --git a/swagger/dfs.md b/swagger/dfs.md new file mode 100644 index 000000000..f4d44b572 --- /dev/null +++ b/swagger/dfs.md @@ -0,0 +1,35 @@ +# Azurite Server DFS (ADLS Gen2) + +> see https://aka.ms/autorest + +```yaml +package-name: azurite-server-dfs +title: AzuriteServerDfs +description: Azurite Server for DFS (ADLS Gen2) +enable-xml: false +generate-metadata: false +license-header: MICROSOFT_MIT_NO_VERSION +output-folder: ../src/blob/generated-dfs +input-file: dfs-storage-2023-11-03.json +model-date-time-as-string: true +optional-response-headers: true +enum-types: true +``` + +## Notes + +The DFS API uses JSON (not XML like Blob), so `enable-xml` is false. + +The AutoRest code generator used by Azurite (`autorest.typescript.server`) is a +custom fork not publicly available. To regenerate, run: + +``` +autorest ./swagger/dfs.md --typescript --use= +``` + +## Changes Made to Standard DFS Swagger + +1. Made `x-ms-version` header optional to match emulator behavior. +2. Added `x-ms-rename-source` header to Path_Create for rename operations. +3. Made `resource` query parameter optional on Path operations (required only for create). +4. Added lease action operations as distinct operations rather than header-dispatched variants. 
diff --git a/tests/BlobTestServerFactory.ts b/tests/BlobTestServerFactory.ts index d8c4311cf..c7b655022 100644 --- a/tests/BlobTestServerFactory.ts +++ b/tests/BlobTestServerFactory.ts @@ -11,7 +11,8 @@ export default class BlobTestServerFactory { loose: boolean = false, skipApiVersionCheck: boolean = false, https: boolean = false, - oauth?: string + oauth?: string, + enableHierarchicalNamespace: boolean = false ): BlobServer | SqlBlobServer { const databaseConnectionString = process.env.AZURITE_TEST_DB; const isSQL = databaseConnectionString !== undefined; @@ -52,6 +53,7 @@ export default class BlobTestServerFactory { undefined, oauth, undefined, + enableHierarchicalNamespace ); return new SqlBlobServer(config); @@ -76,7 +78,9 @@ export default class BlobTestServerFactory { undefined, oauth, undefined, - inMemoryPersistence + inMemoryPersistence, + undefined, + enableHierarchicalNamespace ); return new BlobServer(config); } diff --git a/tests/blob/dfsAclEnforcer.test.ts b/tests/blob/dfsAclEnforcer.test.ts new file mode 100644 index 000000000..00c05f86f --- /dev/null +++ b/tests/blob/dfsAclEnforcer.test.ts @@ -0,0 +1,172 @@ +/** + * Unit tests for DFS ACL enforcement logic. + * + * These test the pure ACL evaluation algorithm without requiring a running + * server or real JWT tokens. The enforcer is invoked by PathHandler when + * --oauth acl is enabled (Phase III). + */ + +import * as assert from "assert"; +import { + checkAcl, + parseAcl, + AclPermission +} from "../../src/blob/dfs/DfsAclEnforcer"; +import { IDfsAuthenticatedIdentity } from "../../src/blob/dfs/DfsContext"; + +describe("DFS ACL Enforcer", () => { + + describe("parseAcl", () => { + it("parses a valid ACL string", () => { + const acl = parseAcl("user::rwx,user:abc-123:r-x,group::r--,mask::rwx,other::---"); + assert.strictEqual(acl.length, 5); + + assert.strictEqual(acl[0].type, "user"); + assert.strictEqual(acl[0].entityId, ""); + assert.strictEqual(acl[0].read, true); + assert.strictEqual(acl[0].write, true); + assert.strictEqual(acl[0].execute, true); + + assert.strictEqual(acl[1].type, "user"); + assert.strictEqual(acl[1].entityId, "abc-123"); + assert.strictEqual(acl[1].read, true); + assert.strictEqual(acl[1].write, false); + assert.strictEqual(acl[1].execute, true); + + assert.strictEqual(acl[3].type, "mask"); + assert.strictEqual(acl[4].type, "other"); + assert.strictEqual(acl[4].read, false); + }); + + it("returns empty array for undefined", () => { + assert.deepStrictEqual(parseAcl(undefined), []); + }); + + it("returns empty array for empty string", () => { + assert.deepStrictEqual(parseAcl(""), []); + }); + }); + + describe("checkAcl — bypass scenarios", () => { + it("bypasses when no identity (emulator mode)", () => { + const result = checkAcl(undefined, "owner1", "group1", "rwxr-x---", undefined, "r"); + assert.strictEqual(result.allowed, true); + assert.ok(result.reason.includes("emulator mode")); + }); + + it("bypasses when identity has no oid or upn", () => { + const identity: IDfsAuthenticatedIdentity = {}; + const result = checkAcl(identity, "owner1", "group1", "rwxr-x---", undefined, "r"); + assert.strictEqual(result.allowed, true); + }); + + it("bypasses when owner is $superuser", () => { + const identity: IDfsAuthenticatedIdentity = { oid: "user1" }; + const result = checkAcl(identity, "$superuser", "$superuser", "rwxr-x---", undefined, "r"); + assert.strictEqual(result.allowed, true); + assert.ok(result.reason.includes("$superuser")); + }); + + it("bypasses when owner is undefined (defaults to 
$superuser)", () => { + const identity: IDfsAuthenticatedIdentity = { oid: "user1" }; + const result = checkAcl(identity, undefined, undefined, undefined, undefined, "r"); + assert.strictEqual(result.allowed, true); + }); + }); + + describe("checkAcl — owner permissions", () => { + it("allows owner with read permission", () => { + const identity: IDfsAuthenticatedIdentity = { oid: "owner1" }; + const result = checkAcl(identity, "owner1", "group1", "r-x------", undefined, "r"); + assert.strictEqual(result.allowed, true); + assert.ok(result.reason.includes("Owner")); + }); + + it("denies owner without write permission", () => { + const identity: IDfsAuthenticatedIdentity = { oid: "owner1" }; + const result = checkAcl(identity, "owner1", "group1", "r-x------", undefined, "w"); + assert.strictEqual(result.allowed, false); + }); + + it("allows owner with full rwx permissions", () => { + const identity: IDfsAuthenticatedIdentity = { oid: "owner1" }; + for (const perm of ["r", "w", "x"] as AclPermission[]) { + const result = checkAcl(identity, "owner1", "group1", "rwx------", undefined, perm); + assert.strictEqual(result.allowed, true, `Expected owner to have ${perm}`); + } + }); + }); + + describe("checkAcl — named user ACL entries", () => { + it("allows named user with matching ACL entry", () => { + const identity: IDfsAuthenticatedIdentity = { oid: "user-abc" }; + const acl = "user::rwx,user:user-abc:r-x,group::r--,other::---"; + const result = checkAcl(identity, "owner1", "group1", "rwxr-----", acl, "r"); + assert.strictEqual(result.allowed, true); + assert.ok(result.reason.includes("Named user")); + }); + + it("denies named user without required permission", () => { + const identity: IDfsAuthenticatedIdentity = { oid: "user-abc" }; + const acl = "user::rwx,user:user-abc:r--,group::r--,other::---"; + const result = checkAcl(identity, "owner1", "group1", "rwxr-----", acl, "w"); + assert.strictEqual(result.allowed, false); + }); + + it("applies mask to named user permissions", () => { + const identity: IDfsAuthenticatedIdentity = { oid: "user-abc" }; + // user has rwx but mask limits to r-- + const acl = "user::rwx,user:user-abc:rwx,mask::r--,group::r--,other::---"; + const result = checkAcl(identity, "owner1", "group1", "rwxr-----", acl, "w"); + assert.strictEqual(result.allowed, false); // mask denies write + }); + }); + + describe("checkAcl — other permissions", () => { + it("falls through to other permissions for unknown user", () => { + const identity: IDfsAuthenticatedIdentity = { oid: "stranger" }; + const result = checkAcl(identity, "owner1", "group1", "rwxr-xr--", undefined, "r"); + assert.strictEqual(result.allowed, true); + assert.ok(result.reason.includes("Other")); + }); + + it("denies stranger when other has no permissions", () => { + const identity: IDfsAuthenticatedIdentity = { oid: "stranger" }; + const result = checkAcl(identity, "owner1", "group1", "rwxr-x---", undefined, "r"); + assert.strictEqual(result.allowed, false); + }); + + it("allows stranger when other has read", () => { + const identity: IDfsAuthenticatedIdentity = { oid: "stranger" }; + const result = checkAcl(identity, "owner1", "group1", "------r--", undefined, "r"); + assert.strictEqual(result.allowed, true); + }); + }); + + describe("checkAcl — group permissions", () => { + it("allows group member with group permissions", () => { + const identity: IDfsAuthenticatedIdentity = { oid: "group1" }; + const result = checkAcl(identity, "owner1", "group1", "---rwx---", undefined, "r"); + 
assert.strictEqual(result.allowed, true); + assert.ok(result.reason.includes("Group")); + }); + + it("denies group member without required permission", () => { + const identity: IDfsAuthenticatedIdentity = { oid: "group1" }; + const result = checkAcl(identity, "owner1", "group1", "------r--", undefined, "r"); + // group perms are chars 3-5 = "---" → denied, falls to other = "r--" + // Actually the caller matches group so it checks group perms first + // chars 3-5 = "---" → denied + assert.strictEqual(result.allowed, false); + }); + }); + + describe("checkAcl — UPN matching", () => { + it("matches identity by upn when oid is not set", () => { + const identity: IDfsAuthenticatedIdentity = { upn: "user@example.com" }; + const result = checkAcl(identity, "user@example.com", "group1", "rwx------", undefined, "r"); + assert.strictEqual(result.allowed, true); + assert.ok(result.reason.includes("Owner")); + }); + }); +}); diff --git a/tests/blob/dfsAclIntegration.test.ts b/tests/blob/dfsAclIntegration.test.ts new file mode 100644 index 000000000..5142f977d --- /dev/null +++ b/tests/blob/dfsAclIntegration.test.ts @@ -0,0 +1,213 @@ +/** + * End-to-end OAuth + ACL enforcement tests for the DFS (ADLS Gen2) endpoint. + * + * Starts an HTTPS server with --oauth acl and HNS enabled. Setup operations + * (creating filesystems, files, setting ACLs) use SAS tokens, which are + * signed via SharedKey and therefore bypass ACL enforcement (no identity). + * Enforcement is then exercised by sending requests with Bearer tokens + * carrying specific OID claims. + */ + +import { + AccountSASPermissions, + AccountSASResourceTypes, + AccountSASServices, + generateAccountSASQueryParameters, + SASProtocol, + StorageSharedKeyCredential +} from "@azure/storage-blob"; +import * as assert from "assert"; +import axios from "axios"; +import * as https from "https"; + +import { configLogger } from "../../src/common/Logger"; +import BlobTestServerFactory from "../BlobTestServerFactory"; +import { + EMULATOR_ACCOUNT_KEY, + EMULATOR_ACCOUNT_NAME, + generateJWTToken, + getUniqueName +} from "../testutils"; + +configLogger(false); + +const httpsAgent = new https.Agent({ rejectUnauthorized: false }); + +const TEST_OID = "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"; +const OTHER_OID = "11111111-2222-3333-4444-555555555555"; + +function makeToken(oid: string): string { + return generateJWTToken( + new Date("2019/01/01"), + new Date("2019/01/01"), + new Date("2100/01/01"), + "https://sts.windows-ppe.net/ab1f708d-50f6-404c-a006-d71b2ac7a606/", + "https://storage.azure.com", + "user_impersonation", + oid, + "dd0d0df1-06c3-436c-8034-4b9a153097ce" + ); +} + +describe("DFS OAuth ACL enforcement", () => { + const factory = new BlobTestServerFactory(); + const server = factory.createServer(false, true, true, "acl", true); + const host = server.config.host; + const port = server.config.port; + const baseUrl = `https://${host}:${port}/${EMULATOR_ACCOUNT_NAME}`; + + const sas = generateAccountSASQueryParameters( + { + expiresOn: new Date(Date.now() + 60 * 60 * 1000), + startsOn: new Date(Date.now() - 10 * 60 * 1000), + permissions: AccountSASPermissions.parse("rwdlacupitfx"), + resourceTypes: AccountSASResourceTypes.parse("sco").toString(), + services: AccountSASServices.parse("b").toString(), + protocol: SASProtocol.HttpsAndHttp + }, + new StorageSharedKeyCredential(EMULATOR_ACCOUNT_NAME, EMULATOR_ACCOUNT_KEY) + ).toString(); + + function url(filesystem: string, path?: string, query?: string): string { + const p = path ? 
`/${path}` : ""; + const q = query ? `?${query}&${sas}` : `?${sas}`; + return `${baseUrl}/${filesystem}${p}${q}`; + } + + async function sasRequest(method: string, endpoint: string, headers: Record = {}, body?: any) { + return axios.request({ + method, url: endpoint, data: body ?? null, + headers: { "x-ms-version": "2025-11-05", "User-Agent": "azsdk-js/storage-file-datalake", ...headers }, + httpsAgent, validateStatus: () => true + }); + } + + async function bearerRequest(method: string, endpoint: string, oid: string, headers: Record = {}, body?: any) { + return axios.request({ + method, url: endpoint, data: body ?? null, + headers: { + "x-ms-version": "2025-11-05", + "Authorization": `Bearer ${makeToken(oid)}`, + "User-Agent": "azsdk-js/storage-file-datalake", + ...headers + }, + httpsAgent, validateStatus: () => true + }); + } + + before(async () => { await server.start(); }); + after(async () => { await server.close(); await server.clean(); }); + + it("allows read when user OID matches named user ACL entry @loki", async () => { + const fs = getUniqueName("aclfs"); + + // Create filesystem + file via SAS (bypasses ACL enforcement — no identity) + await sasRequest("PUT", url(fs, undefined, "resource=filesystem")); + await sasRequest("PUT", url(fs, "data.txt", "resource=file")); + await sasRequest("PATCH", url(fs, "data.txt", "action=append&position=0"), + { "Content-Type": "application/octet-stream" }, "hello"); + await sasRequest("PATCH", url(fs, "data.txt", "action=flush&position=5")); + + // Set ACL: deny owner/group/other, but allow TEST_OID named user read + await sasRequest("PATCH", url(fs, "data.txt", "action=setAccessControl"), { + "x-ms-owner": "$superuser", + "x-ms-group": "$superuser", + "x-ms-acl": `user::---,user:${TEST_OID}:r--,group::---,other::---` + }); + + // TEST_OID can read (has r-- on file) + const allowed = await bearerRequest("GET", `https://${host}:${port}/${EMULATOR_ACCOUNT_NAME}/${fs}/data.txt`, TEST_OID); + assert.strictEqual(allowed.status, 200, `Expected TEST_OID read to be allowed, got ${allowed.status}`); + + // OTHER_OID is denied (falls through to other::---) + const denied = await bearerRequest("GET", `https://${host}:${port}/${EMULATOR_ACCOUNT_NAME}/${fs}/data.txt`, OTHER_OID); + assert.strictEqual(denied.status, 403, `Expected OTHER_OID read to be denied, got ${denied.status}`); + + await sasRequest("DELETE", url(fs, undefined, "resource=filesystem")); + }); + + it("allows write when user is owner @loki", async () => { + const fs = getUniqueName("aclfs"); + + await sasRequest("PUT", url(fs, undefined, "resource=filesystem")); + await sasRequest("PUT", url(fs, "writable.txt", "resource=file")); + + // Make TEST_OID the owner with rw- permissions; deny others + await sasRequest("PATCH", url(fs, "writable.txt", "action=setAccessControl"), { + "x-ms-owner": TEST_OID, + "x-ms-acl": "user::rw-,group::---,other::---" + }); + + // Owner can append (write) + const allowed = await bearerRequest( + "PATCH", + `https://${host}:${port}/${EMULATOR_ACCOUNT_NAME}/${fs}/writable.txt?action=append&position=0`, + TEST_OID, + { "Content-Type": "application/octet-stream" }, + "data" + ); + assert.strictEqual(allowed.status, 202, `Expected owner write to be allowed, got ${allowed.status}`); + + // Non-owner is denied + const denied = await bearerRequest( + "PATCH", + `https://${host}:${port}/${EMULATOR_ACCOUNT_NAME}/${fs}/writable.txt?action=append&position=4`, + OTHER_OID, + { "Content-Type": "application/octet-stream" }, + "more" + ); + assert.strictEqual(denied.status, 
403, `Expected non-owner write to be denied, got ${denied.status}`); + + await sasRequest("DELETE", url(fs, undefined, "resource=filesystem")); + }); + + it("denies file create when caller lacks write on parent directory @loki", async () => { + const fs = getUniqueName("aclfs"); + + await sasRequest("PUT", url(fs, undefined, "resource=filesystem")); + await sasRequest("PUT", url(fs, "dir", "resource=directory")); + + // Set ACL on dir: owner is TEST_OID with rwx, others have no write + await sasRequest("PATCH", url(fs, "dir", "action=setAccessControl"), { + "x-ms-owner": TEST_OID, + "x-ms-acl": "user::rwx,group::r-x,other::r-x" + }); + + // TEST_OID (owner with w) can create a file inside dir + const allowed = await bearerRequest( + "PUT", + `https://${host}:${port}/${EMULATOR_ACCOUNT_NAME}/${fs}/dir/new.txt?resource=file`, + TEST_OID + ); + assert.strictEqual(allowed.status, 201, `Expected owner to be allowed to create, got ${allowed.status}`); + + // OTHER_OID (other, no w) cannot create inside dir + const denied = await bearerRequest( + "PUT", + `https://${host}:${port}/${EMULATOR_ACCOUNT_NAME}/${fs}/dir/forbidden.txt?resource=file`, + OTHER_OID + ); + assert.strictEqual(denied.status, 403, `Expected non-owner to be denied create, got ${denied.status}`); + + await sasRequest("DELETE", url(fs, undefined, "resource=filesystem")); + }); + + it("SAS requests bypass ACL enforcement entirely @loki", async () => { + const fs = getUniqueName("aclfs"); + + await sasRequest("PUT", url(fs, undefined, "resource=filesystem")); + await sasRequest("PUT", url(fs, "locked.txt", "resource=file")); + + // ACL denies everyone + await sasRequest("PATCH", url(fs, "locked.txt", "action=setAccessControl"), { + "x-ms-owner": "$superuser", + "x-ms-acl": "user::---,group::---,other::---" + }); + + // SAS request (no identity extracted) still succeeds + const res = await sasRequest("HEAD", url(fs, "locked.txt")); + assert.strictEqual(res.status, 200, `Expected SAS request to bypass ACL, got ${res.status}`); + + await sasRequest("DELETE", url(fs, undefined, "resource=filesystem")); + }); +}); diff --git a/tests/blob/dfsProxy.test.ts b/tests/blob/dfsProxy.test.ts new file mode 100644 index 000000000..3472cff3a --- /dev/null +++ b/tests/blob/dfsProxy.test.ts @@ -0,0 +1,1313 @@ +import { + AccountSASPermissions, + AccountSASResourceTypes, + AccountSASServices, + BlobServiceClient, + generateAccountSASQueryParameters, + newPipeline, + SASProtocol, + StorageSharedKeyCredential +} from "@azure/storage-blob"; +import axios from "axios"; +import * as assert from "assert"; + +import { BLOB_API_VERSION } from "../../src/blob/utils/constants"; +import { configLogger } from "../../src/common/Logger"; +import BlobTestServerFactory from "../BlobTestServerFactory"; +import { + EMULATOR_ACCOUNT_KEY, + EMULATOR_ACCOUNT_NAME, + getUniqueName +} from "../testutils"; + +configLogger(false); + +// All DFS requests must carry a signal the router recognises as DFS. +// Blob API leases carry ?comp=lease; DFS operations don't, but some (plain HEAD/DELETE) +// carry no other signal, so we add the DataLake SDK user-agent string. 
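+// (Example of the problem, under that assumption: a bare `HEAD /devstoreaccount1/myfs/file.txt`
+// carries neither ?resource= nor ?action=, so without this header it would fall through to the
+// Blob handlers instead of the DFS path handlers.)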
+const dfsAxios = axios.create({ + headers: { "User-Agent": "azsdk-js/storage-file-datalake" } +}); + +describe("DfsProxy", () => { + const factory = new BlobTestServerFactory(); + const blobServer = factory.createServer(false, true, false, undefined, true); + + const blobServiceClient = new BlobServiceClient( + `http://${blobServer.config.host}:${blobServer.config.port}/${EMULATOR_ACCOUNT_NAME}`, + newPipeline(new StorageSharedKeyCredential(EMULATOR_ACCOUNT_NAME, EMULATOR_ACCOUNT_KEY), { + retryOptions: { maxTries: 1 }, + keepAliveOptions: { enable: false } + }) + ); + + const sas = generateAccountSASQueryParameters( + { + expiresOn: new Date(Date.now() + 60 * 60 * 1000), + startsOn: new Date(Date.now() - 10 * 60 * 1000), + permissions: AccountSASPermissions.parse("rwdlacupitfx"), + resourceTypes: AccountSASResourceTypes.parse("sco").toString(), + services: AccountSASServices.parse("b").toString(), + protocol: SASProtocol.HttpsAndHttp + }, + new StorageSharedKeyCredential(EMULATOR_ACCOUNT_NAME, EMULATOR_ACCOUNT_KEY) + ).toString(); + + const dfsBaseUrl = `http://${blobServer.config.host}:${blobServer.config.port}/${EMULATOR_ACCOUNT_NAME}`; + + before(async () => { + await blobServer.start(); + }); + + after(async () => { + await blobServer.close(); + await blobServer.clean(); + }); + + it("maps filesystem create and delete to container operations @loki @sql", async () => { + const fileSystemName = getUniqueName("fs"); + const createUrl = `${dfsBaseUrl}/${fileSystemName}?resource=filesystem&${sas}`; + + const createResponse = await axios.put(createUrl, undefined, { + headers: { + "x-ms-version": BLOB_API_VERSION + }, + validateStatus: () => true + }); + + assert.strictEqual(createResponse.status, 201); + + const created = await blobServiceClient + .getContainerClient(fileSystemName) + .getProperties(); + assert.ok(created.etag); + + const deleteResponse = await dfsAxios.delete(createUrl, { + headers: { + "x-ms-version": BLOB_API_VERSION + }, + validateStatus: () => true + }); + + assert.strictEqual(deleteResponse.status, 202); + + try { + await blobServiceClient.getContainerClient(fileSystemName).getProperties(); + assert.fail("Expected container to be deleted"); + } catch (error) { + assert.strictEqual((error as any).statusCode, 404); + } + }); + + it("maps filesystem HEAD to container properties and returns filesystem header @loki @sql", async () => { + const fileSystemName = getUniqueName("fs"); + const containerClient = blobServiceClient.getContainerClient(fileSystemName); + await containerClient.create(); + + const headUrl = `${dfsBaseUrl}/${fileSystemName}?resource=filesystem&${sas}`; + + const response = await dfsAxios.head(headUrl, { + headers: { + "x-ms-version": BLOB_API_VERSION + }, + validateStatus: () => true + }); + + assert.strictEqual(response.status, 200); + assert.strictEqual(response.headers["x-ms-resource-type"], "filesystem"); + + await containerClient.delete(); + }); + + it("creates and reads a file via DFS path operations @loki @sql", async () => { + const fileSystemName = getUniqueName("fs"); + const containerClient = blobServiceClient.getContainerClient(fileSystemName); + await containerClient.create(); + + // Create a file + const fileName = "test-file.txt"; + const createUrl = `${dfsBaseUrl}/${fileSystemName}/${fileName}?resource=file&${sas}`; + + const createResponse = await axios.put(createUrl, undefined, { + headers: { "x-ms-version": BLOB_API_VERSION }, + validateStatus: () => true + }); + assert.strictEqual(createResponse.status, 201); + + // Verify file 
exists via blob API + const blobClient = containerClient.getBlobClient(fileName); + const props = await blobClient.getProperties(); + assert.ok(props.etag); + assert.strictEqual(props.contentLength, 0); + + // Get path properties via DFS + const headUrl = `${dfsBaseUrl}/${fileSystemName}/${fileName}?${sas}`; + const headResponse = await dfsAxios.head(headUrl, { + headers: { "x-ms-version": BLOB_API_VERSION }, + validateStatus: () => true + }); + assert.strictEqual(headResponse.status, 200); + assert.strictEqual(headResponse.headers["x-ms-resource-type"], "file"); + + // Delete via DFS + const deleteUrl = `${dfsBaseUrl}/${fileSystemName}/${fileName}?${sas}`; + const deleteResponse = await dfsAxios.delete(deleteUrl, { + headers: { "x-ms-version": BLOB_API_VERSION }, + validateStatus: () => true + }); + assert.strictEqual(deleteResponse.status, 200); + + await containerClient.delete(); + }); + + it("creates a directory with hdi_isfolder metadata @loki @sql", async () => { + const fileSystemName = getUniqueName("fs"); + const containerClient = blobServiceClient.getContainerClient(fileSystemName); + await containerClient.create(); + + const dirName = "test-dir"; + const createUrl = `${dfsBaseUrl}/${fileSystemName}/${dirName}?resource=directory&${sas}`; + + const createResponse = await axios.put(createUrl, undefined, { + headers: { "x-ms-version": BLOB_API_VERSION }, + validateStatus: () => true + }); + assert.strictEqual(createResponse.status, 201); + + // Verify it's a directory via DFS HEAD + const headUrl = `${dfsBaseUrl}/${fileSystemName}/${dirName}?${sas}`; + const headResponse = await dfsAxios.head(headUrl, { + headers: { "x-ms-version": BLOB_API_VERSION }, + validateStatus: () => true + }); + assert.strictEqual(headResponse.status, 200); + assert.strictEqual(headResponse.headers["x-ms-resource-type"], "directory"); + + // Delete directory + const deleteUrl = `${dfsBaseUrl}/${fileSystemName}/${dirName}?recursive=true&${sas}`; + const deleteResponse = await dfsAxios.delete(deleteUrl, { + headers: { "x-ms-version": BLOB_API_VERSION }, + validateStatus: () => true + }); + assert.strictEqual(deleteResponse.status, 200); + + await containerClient.delete(); + }); + + it("lists paths in a filesystem @loki @sql", async () => { + const fileSystemName = getUniqueName("fs"); + const containerClient = blobServiceClient.getContainerClient(fileSystemName); + await containerClient.create(); + + // Create some files via DFS + for (const name of ["file1.txt", "file2.txt", "dir1"]) { + const resource = name === "dir1" ? 
"directory" : "file"; + const url = `${dfsBaseUrl}/${fileSystemName}/${name}?resource=${resource}&${sas}`; + await axios.put(url, undefined, { + headers: { "x-ms-version": BLOB_API_VERSION }, + validateStatus: () => true + }); + } + + // List paths + const listUrl = `${dfsBaseUrl}/${fileSystemName}?resource=filesystem&recursive=true&${sas}`; + const listResponse = await axios.get(listUrl, { + headers: { "x-ms-version": BLOB_API_VERSION }, + validateStatus: () => true + }); + + assert.strictEqual(listResponse.status, 200); + assert.ok(listResponse.data.paths); + assert.ok(listResponse.data.paths.length >= 3); + + const pathNames = listResponse.data.paths.map((p: any) => p.name); + assert.ok(pathNames.includes("file1.txt")); + assert.ok(pathNames.includes("file2.txt")); + assert.ok(pathNames.includes("dir1")); + + // Verify dir1 is marked as directory + const dir1 = listResponse.data.paths.find((p: any) => p.name === "dir1"); + assert.strictEqual(dir1.isDirectory, true); + + await containerClient.delete(); + }); + + it("appends data and flushes to create file content @loki @sql", async () => { + const fileSystemName = getUniqueName("fs"); + const containerClient = blobServiceClient.getContainerClient(fileSystemName); + await containerClient.create(); + + const fileName = "append-test.txt"; + + // Create empty file + const createUrl = `${dfsBaseUrl}/${fileSystemName}/${fileName}?resource=file&${sas}`; + await axios.put(createUrl, undefined, { + headers: { "x-ms-version": BLOB_API_VERSION }, + validateStatus: () => true + }); + + // Append data + const data1 = "Hello, "; + const data2 = "World!"; + + const append1Url = `${dfsBaseUrl}/${fileSystemName}/${fileName}?action=append&position=0&${sas}`; + const append1Response = await dfsAxios.patch(append1Url, data1, { + headers: { + "x-ms-version": BLOB_API_VERSION, + "Content-Type": "application/octet-stream" + }, + validateStatus: () => true + }); + assert.strictEqual(append1Response.status, 202); + + const append2Url = `${dfsBaseUrl}/${fileSystemName}/${fileName}?action=append&position=${Buffer.byteLength(data1)}&${sas}`; + const append2Response = await dfsAxios.patch(append2Url, data2, { + headers: { + "x-ms-version": BLOB_API_VERSION, + "Content-Type": "application/octet-stream" + }, + validateStatus: () => true + }); + assert.strictEqual(append2Response.status, 202); + + // Flush + const totalLength = Buffer.byteLength(data1) + Buffer.byteLength(data2); + const flushUrl = `${dfsBaseUrl}/${fileSystemName}/${fileName}?action=flush&position=${totalLength}&${sas}`; + const flushResponse = await dfsAxios.patch(flushUrl, null, { + headers: { "x-ms-version": BLOB_API_VERSION }, + validateStatus: () => true + }); + assert.strictEqual(flushResponse.status, 200); + + // Read back via DFS + const readUrl = `${dfsBaseUrl}/${fileSystemName}/${fileName}?${sas}`; + const readResponse = await dfsAxios.get(readUrl, { + headers: { "x-ms-version": BLOB_API_VERSION }, + validateStatus: () => true + }); + assert.strictEqual(readResponse.status, 200); + assert.strictEqual(readResponse.data, "Hello, World!"); + + await containerClient.delete(); + }); + + it("renames a file via DFS @loki @sql", async () => { + const fileSystemName = getUniqueName("fs"); + const containerClient = blobServiceClient.getContainerClient(fileSystemName); + await containerClient.create(); + + // Create a file + const oldName = "old-file.txt"; + const createUrl = `${dfsBaseUrl}/${fileSystemName}/${oldName}?resource=file&${sas}`; + await axios.put(createUrl, undefined, { + headers: { 
"x-ms-version": BLOB_API_VERSION }, + validateStatus: () => true + }); + + // Rename it + const newName = "new-file.txt"; + const renameUrl = `${dfsBaseUrl}/${fileSystemName}/${newName}?${sas}`; + const renameResponse = await axios.put(renameUrl, undefined, { + headers: { + "x-ms-version": BLOB_API_VERSION, + "x-ms-rename-source": `/${EMULATOR_ACCOUNT_NAME}/${fileSystemName}/${oldName}` + }, + validateStatus: () => true + }); + assert.strictEqual(renameResponse.status, 201); + + // Old path should not exist + const oldHeadUrl = `${dfsBaseUrl}/${fileSystemName}/${oldName}?${sas}`; + const oldHeadResponse = await dfsAxios.head(oldHeadUrl, { + headers: { "x-ms-version": BLOB_API_VERSION }, + validateStatus: () => true + }); + assert.strictEqual(oldHeadResponse.status, 404); + + // New path should exist + const newHeadUrl = `${dfsBaseUrl}/${fileSystemName}/${newName}?${sas}`; + const newHeadResponse = await dfsAxios.head(newHeadUrl, { + headers: { "x-ms-version": BLOB_API_VERSION }, + validateStatus: () => true + }); + assert.strictEqual(newHeadResponse.status, 200); + + await containerClient.delete(); + }); + + it("sets and gets ACLs on a path @loki @sql", async () => { + const fileSystemName = getUniqueName("fs"); + const containerClient = blobServiceClient.getContainerClient(fileSystemName); + await containerClient.create(); + + // Create a file + const fileName = "acl-test.txt"; + const createUrl = `${dfsBaseUrl}/${fileSystemName}/${fileName}?resource=file&${sas}`; + await axios.put(createUrl, undefined, { + headers: { "x-ms-version": BLOB_API_VERSION }, + validateStatus: () => true + }); + + // Set ACL + const setAclUrl = `${dfsBaseUrl}/${fileSystemName}/${fileName}?action=setAccessControl&${sas}`; + const setAclResponse = await dfsAxios.patch(setAclUrl, null, { + headers: { + "x-ms-version": BLOB_API_VERSION, + "x-ms-owner": "testowner", + "x-ms-group": "testgroup", + "x-ms-permissions": "rwxr-x---", + "x-ms-acl": "user::rwx,group::r-x,other::---" + }, + validateStatus: () => true + }); + assert.strictEqual(setAclResponse.status, 200); + + // Get ACL + const getAclUrl = `${dfsBaseUrl}/${fileSystemName}/${fileName}?action=getAccessControl&${sas}`; + const getAclResponse = await dfsAxios.head(getAclUrl, { + headers: { "x-ms-version": BLOB_API_VERSION }, + validateStatus: () => true + }); + assert.strictEqual(getAclResponse.status, 200); + assert.strictEqual(getAclResponse.headers["x-ms-owner"], "testowner"); + assert.strictEqual(getAclResponse.headers["x-ms-group"], "testgroup"); + assert.strictEqual(getAclResponse.headers["x-ms-permissions"], "rwxr-x---"); + assert.strictEqual(getAclResponse.headers["x-ms-acl"], "user::rwx,group::r-x,other::---"); + + await containerClient.delete(); + }); + + it("sets filesystem properties via PATCH @loki @sql", async () => { + const fileSystemName = getUniqueName("fs"); + const createUrl = `${dfsBaseUrl}/${fileSystemName}?resource=filesystem&${sas}`; + await axios.put(createUrl, undefined, { + headers: { "x-ms-version": BLOB_API_VERSION }, + validateStatus: () => true + }); + + // Set properties + const propValue = Buffer.from("bar").toString("base64"); + const patchUrl = `${dfsBaseUrl}/${fileSystemName}?resource=filesystem&${sas}`; + const patchResponse = await dfsAxios.patch(patchUrl, null, { + headers: { + "x-ms-version": BLOB_API_VERSION, + "x-ms-properties": `foo=${propValue}` + }, + validateStatus: () => true + }); + assert.strictEqual(patchResponse.status, 200); + + // Delete + await dfsAxios.delete(createUrl, { + headers: { "x-ms-version": 
BLOB_API_VERSION }, + validateStatus: () => true + }); + }); + + it("validates Content-MD5 on append @loki @sql", async () => { + const fileSystemName = getUniqueName("fs"); + const containerClient = blobServiceClient.getContainerClient(fileSystemName); + await containerClient.create(); + + const fileName = "md5-test.txt"; + const createUrl = `${dfsBaseUrl}/${fileSystemName}/${fileName}?resource=file&${sas}`; + await axios.put(createUrl, undefined, { + headers: { "x-ms-version": BLOB_API_VERSION }, + validateStatus: () => true + }); + + // Append with correct MD5 + const data = "test data"; + const crypto = require("crypto"); + const correctMD5 = crypto.createHash("md5").update(data).digest("base64"); + + const appendUrl = `${dfsBaseUrl}/${fileSystemName}/${fileName}?action=append&position=0&${sas}`; + const goodResponse = await dfsAxios.patch(appendUrl, data, { + headers: { + "x-ms-version": BLOB_API_VERSION, + "Content-Type": "application/octet-stream", + "Content-MD5": correctMD5 + }, + validateStatus: () => true + }); + assert.strictEqual(goodResponse.status, 202); + + // Append with wrong MD5 + const appendUrl2 = `${dfsBaseUrl}/${fileSystemName}/${fileName}?action=append&position=${Buffer.byteLength(data)}&${sas}`; + const badResponse = await dfsAxios.patch(appendUrl2, "more data", { + headers: { + "x-ms-version": BLOB_API_VERSION, + "Content-Type": "application/octet-stream", + "Content-MD5": "AAAAAAAAAAAAAAAAAAAAAA==" + }, + validateStatus: () => true + }); + assert.strictEqual(badResponse.status, 400); + assert.strictEqual(badResponse.data.error.code, "Md5Mismatch"); + + await containerClient.delete(); + }); + + it("respects If-Match conditional header on getProperties @loki @sql", async () => { + const fileSystemName = getUniqueName("fs"); + const containerClient = blobServiceClient.getContainerClient(fileSystemName); + await containerClient.create(); + + const fileName = "cond-test.txt"; + const createUrl = `${dfsBaseUrl}/${fileSystemName}/${fileName}?resource=file&${sas}`; + const createResponse = await axios.put(createUrl, undefined, { + headers: { "x-ms-version": BLOB_API_VERSION }, + validateStatus: () => true + }); + assert.strictEqual(createResponse.status, 201); + const etag = createResponse.headers["etag"]; + + // Matching ETag should succeed + const headUrl = `${dfsBaseUrl}/${fileSystemName}/${fileName}?${sas}`; + const matchResponse = await dfsAxios.head(headUrl, { + headers: { + "x-ms-version": BLOB_API_VERSION, + "If-Match": etag + }, + validateStatus: () => true + }); + assert.strictEqual(matchResponse.status, 200); + + // Non-matching ETag should fail with 412 + const noMatchResponse = await dfsAxios.head(headUrl, { + headers: { + "x-ms-version": BLOB_API_VERSION, + "If-Match": `"0xDEADBEEF"` + }, + validateStatus: () => true + }); + assert.strictEqual(noMatchResponse.status, 412); + + await containerClient.delete(); + }); + + it("respects If-None-Match conditional header on read @loki @sql", async () => { + const fileSystemName = getUniqueName("fs"); + const containerClient = blobServiceClient.getContainerClient(fileSystemName); + await containerClient.create(); + + const fileName = "cond-read.txt"; + const createUrl = `${dfsBaseUrl}/${fileSystemName}/${fileName}?resource=file&${sas}`; + const createResponse = await axios.put(createUrl, undefined, { + headers: { "x-ms-version": BLOB_API_VERSION }, + validateStatus: () => true + }); + const etag = createResponse.headers["etag"]; + + // Read with non-matching If-None-Match should succeed + const readUrl = 
`${dfsBaseUrl}/${fileSystemName}/${fileName}?${sas}`; + const readResponse = await dfsAxios.get(readUrl, { + headers: { + "x-ms-version": BLOB_API_VERSION, + "If-None-Match": `"0xDEADBEEF"` + }, + validateStatus: () => true + }); + assert.strictEqual(readResponse.status, 200); + + // Read with matching If-None-Match should return 304 + const notModifiedResponse = await dfsAxios.get(readUrl, { + headers: { + "x-ms-version": BLOB_API_VERSION, + "If-None-Match": etag + }, + validateStatus: () => true + }); + assert.strictEqual(notModifiedResponse.status, 304); + + await containerClient.delete(); + }); + + it("acquires, renews, and releases a lease on a path @loki @sql", async () => { + const fileSystemName = getUniqueName("fs"); + const containerClient = blobServiceClient.getContainerClient(fileSystemName); + await containerClient.create(); + + const fileName = "lease-test.txt"; + const createUrl = `${dfsBaseUrl}/${fileSystemName}/${fileName}?resource=file&${sas}`; + await axios.put(createUrl, undefined, { + headers: { "x-ms-version": BLOB_API_VERSION }, + validateStatus: () => true + }); + + const pathUrl = `${dfsBaseUrl}/${fileSystemName}/${fileName}?${sas}`; + + // Acquire lease + const acquireResponse = await dfsAxios.post(pathUrl, null, { + headers: { + "x-ms-version": BLOB_API_VERSION, + "x-ms-lease-action": "acquire", + "x-ms-lease-duration": "60" + }, + validateStatus: () => true + }); + assert.strictEqual(acquireResponse.status, 201); + const leaseId = acquireResponse.headers["x-ms-lease-id"]; + assert.ok(leaseId); + + // Renew lease + const renewResponse = await dfsAxios.post(pathUrl, null, { + headers: { + "x-ms-version": BLOB_API_VERSION, + "x-ms-lease-action": "renew", + "x-ms-lease-id": leaseId! + }, + validateStatus: () => true + }); + assert.strictEqual(renewResponse.status, 200); + + // Release lease + const releaseResponse = await dfsAxios.post(pathUrl, null, { + headers: { + "x-ms-version": BLOB_API_VERSION, + "x-ms-lease-action": "release", + "x-ms-lease-id": leaseId! 
+ }, + validateStatus: () => true + }); + assert.strictEqual(releaseResponse.status, 200); + + await containerClient.delete(); + }); + + it("breaks a lease on a path @loki @sql", async () => { + const fileSystemName = getUniqueName("fs"); + const containerClient = blobServiceClient.getContainerClient(fileSystemName); + await containerClient.create(); + + const fileName = "break-lease.txt"; + const createUrl = `${dfsBaseUrl}/${fileSystemName}/${fileName}?resource=file&${sas}`; + await axios.put(createUrl, undefined, { + headers: { "x-ms-version": BLOB_API_VERSION }, + validateStatus: () => true + }); + + const pathUrl = `${dfsBaseUrl}/${fileSystemName}/${fileName}?${sas}`; + + // Acquire lease first + const acquireResponse = await dfsAxios.post(pathUrl, null, { + headers: { + "x-ms-version": BLOB_API_VERSION, + "x-ms-lease-action": "acquire", + "x-ms-lease-duration": "60" + }, + validateStatus: () => true + }); + assert.strictEqual(acquireResponse.status, 201); + + // Break lease + const breakResponse = await dfsAxios.post(pathUrl, null, { + headers: { + "x-ms-version": BLOB_API_VERSION, + "x-ms-lease-action": "break" + }, + validateStatus: () => true + }); + assert.strictEqual(breakResponse.status, 202); + + await containerClient.delete(); + }); + + it("changes a lease on a path @loki @sql", async () => { + const fileSystemName = getUniqueName("fs"); + const containerClient = blobServiceClient.getContainerClient(fileSystemName); + await containerClient.create(); + + const fileName = "change-lease.txt"; + const createUrl = `${dfsBaseUrl}/${fileSystemName}/${fileName}?resource=file&${sas}`; + await axios.put(createUrl, undefined, { + headers: { "x-ms-version": BLOB_API_VERSION }, + validateStatus: () => true + }); + + const pathUrl = `${dfsBaseUrl}/${fileSystemName}/${fileName}?${sas}`; + + // Acquire lease + const acquireResponse = await dfsAxios.post(pathUrl, null, { + headers: { + "x-ms-version": BLOB_API_VERSION, + "x-ms-lease-action": "acquire", + "x-ms-lease-duration": "60" + }, + validateStatus: () => true + }); + assert.strictEqual(acquireResponse.status, 201); + const leaseId = acquireResponse.headers["x-ms-lease-id"]; + assert.ok(leaseId); + + // Change lease + const newLeaseId = "d7e6eb60-f905-4b44-a090-123456789012"; + const changeResponse = await dfsAxios.post(pathUrl, null, { + headers: { + "x-ms-version": BLOB_API_VERSION, + "x-ms-lease-action": "change", + "x-ms-lease-id": leaseId!, + "x-ms-proposed-lease-id": newLeaseId + }, + validateStatus: () => true + }); + assert.strictEqual(changeResponse.status, 200); + assert.strictEqual(changeResponse.headers["x-ms-lease-id"], newLeaseId); + + // Release with new lease ID + await dfsAxios.post(pathUrl, null, { + headers: { + "x-ms-version": BLOB_API_VERSION, + "x-ms-lease-action": "release", + "x-ms-lease-id": newLeaseId + }, + validateStatus: () => true + }); + + await containerClient.delete(); + }); + + it("renames a directory and its children atomically @loki @sql", async () => { + const fileSystemName = getUniqueName("fs"); + const containerClient = blobServiceClient.getContainerClient(fileSystemName); + await containerClient.create(); + + // Create a directory with children + const dirName = "src-dir"; + const createDirUrl = `${dfsBaseUrl}/${fileSystemName}/${dirName}?resource=directory&${sas}`; + await axios.put(createDirUrl, undefined, { + headers: { "x-ms-version": BLOB_API_VERSION }, + validateStatus: () => true + }); + + for (const child of ["child1.txt", "child2.txt"]) { + const createFileUrl = 
`${dfsBaseUrl}/${fileSystemName}/${dirName}/${child}?resource=file&${sas}`; + await axios.put(createFileUrl, undefined, { + headers: { "x-ms-version": BLOB_API_VERSION }, + validateStatus: () => true + }); + } + + // Rename directory + const newDirName = "dest-dir"; + const renameUrl = `${dfsBaseUrl}/${fileSystemName}/${newDirName}?${sas}`; + const renameResponse = await axios.put(renameUrl, undefined, { + headers: { + "x-ms-version": BLOB_API_VERSION, + "x-ms-rename-source": `/${EMULATOR_ACCOUNT_NAME}/${fileSystemName}/${dirName}` + }, + validateStatus: () => true + }); + assert.strictEqual(renameResponse.status, 201); + + // Verify old dir doesn't exist + const oldHeadUrl = `${dfsBaseUrl}/${fileSystemName}/${dirName}?${sas}`; + const oldHeadResponse = await dfsAxios.head(oldHeadUrl, { + headers: { "x-ms-version": BLOB_API_VERSION }, + validateStatus: () => true + }); + assert.strictEqual(oldHeadResponse.status, 404); + + // Verify new dir exists + const newHeadUrl = `${dfsBaseUrl}/${fileSystemName}/${newDirName}?${sas}`; + const newHeadResponse = await dfsAxios.head(newHeadUrl, { + headers: { "x-ms-version": BLOB_API_VERSION }, + validateStatus: () => true + }); + assert.strictEqual(newHeadResponse.status, 200); + assert.strictEqual(newHeadResponse.headers["x-ms-resource-type"], "directory"); + + // Verify children were moved + for (const child of ["child1.txt", "child2.txt"]) { + const childUrl = `${dfsBaseUrl}/${fileSystemName}/${newDirName}/${child}?${sas}`; + const childResponse = await dfsAxios.head(childUrl, { + headers: { "x-ms-version": BLOB_API_VERSION }, + validateStatus: () => true + }); + assert.strictEqual(childResponse.status, 200, `Expected ${newDirName}/${child} to exist`); + } + + await containerClient.delete(); + }); + + it("prevents deleting non-empty directory without recursive flag @loki @sql", async () => { + const fileSystemName = getUniqueName("fs"); + const containerClient = blobServiceClient.getContainerClient(fileSystemName); + await containerClient.create(); + + // Create directory with a child file + const dirName = "nonempty-dir"; + await axios.put( + `${dfsBaseUrl}/${fileSystemName}/${dirName}?resource=directory&${sas}`, + undefined, + { headers: { "x-ms-version": BLOB_API_VERSION }, validateStatus: () => true } + ); + await axios.put( + `${dfsBaseUrl}/${fileSystemName}/${dirName}/file.txt?resource=file&${sas}`, + undefined, + { headers: { "x-ms-version": BLOB_API_VERSION }, validateStatus: () => true } + ); + + // Try to delete without recursive — should fail with 409 + const deleteUrl = `${dfsBaseUrl}/${fileSystemName}/${dirName}?${sas}`; + const deleteResponse = await dfsAxios.delete(deleteUrl, { + headers: { "x-ms-version": BLOB_API_VERSION }, + validateStatus: () => true + }); + assert.strictEqual(deleteResponse.status, 409); + assert.strictEqual(deleteResponse.data.error.code, "DirectoryNotEmpty"); + + // Delete with recursive=true should succeed + const recursiveDeleteUrl = `${dfsBaseUrl}/${fileSystemName}/${dirName}?recursive=true&${sas}`; + const recursiveDeleteResponse = await dfsAxios.delete(recursiveDeleteUrl, { + headers: { "x-ms-version": BLOB_API_VERSION }, + validateStatus: () => true + }); + assert.strictEqual(recursiveDeleteResponse.status, 200); + + // Verify directory is gone + const headUrl = `${dfsBaseUrl}/${fileSystemName}/${dirName}?${sas}`; + const headResponse = await dfsAxios.head(headUrl, { + headers: { "x-ms-version": BLOB_API_VERSION }, + validateStatus: () => true + }); + assert.strictEqual(headResponse.status, 404); + + await 
containerClient.delete(); + }); + + it("auto-creates intermediate directories in HNS hierarchy @loki @sql", async () => { + const fileSystemName = getUniqueName("fs"); + const containerClient = blobServiceClient.getContainerClient(fileSystemName); + await containerClient.create(); + + // Create a deeply nested file — intermediate dirs should be created + const deepPath = "a/b/c/deep-file.txt"; + const createUrl = `${dfsBaseUrl}/${fileSystemName}/${deepPath}?resource=file&${sas}`; + const createResponse = await axios.put(createUrl, undefined, { + headers: { "x-ms-version": BLOB_API_VERSION }, + validateStatus: () => true + }); + assert.strictEqual(createResponse.status, 201); + + // Verify intermediate directories exist + for (const dir of ["a", "a/b", "a/b/c"]) { + const headUrl = `${dfsBaseUrl}/${fileSystemName}/${dir}?${sas}`; + const headResponse = await dfsAxios.head(headUrl, { + headers: { "x-ms-version": BLOB_API_VERSION }, + validateStatus: () => true + }); + assert.strictEqual(headResponse.status, 200, `Expected directory ${dir} to exist`); + assert.strictEqual(headResponse.headers["x-ms-resource-type"], "directory"); + } + + await containerClient.delete(); + }); + + // --------------------------------------------------------------------------- + // setAccessControlRecursive + // --------------------------------------------------------------------------- + + it("sets ACL recursively on a directory tree with mode=set @loki @sql", async () => { + const fileSystemName = getUniqueName("fs"); + const containerClient = blobServiceClient.getContainerClient(fileSystemName); + await containerClient.create(); + + // Build: dir/ dir/file1.txt dir/subdir/ dir/subdir/file2.txt + for (const [url] of [ + [`${dfsBaseUrl}/${fileSystemName}/dir?resource=directory&${sas}`], + [`${dfsBaseUrl}/${fileSystemName}/dir/file1.txt?resource=file&${sas}`], + [`${dfsBaseUrl}/${fileSystemName}/dir/subdir?resource=directory&${sas}`], + [`${dfsBaseUrl}/${fileSystemName}/dir/subdir/file2.txt?resource=file&${sas}`] + ]) { + await axios.put(url, undefined, { headers: { "x-ms-version": BLOB_API_VERSION }, validateStatus: () => true }); + } + + const aclUrl = `${dfsBaseUrl}/${fileSystemName}/dir?action=setAccessControlRecursive&mode=set&${sas}`; + const response = await dfsAxios.patch(aclUrl, null, { + headers: { "x-ms-version": BLOB_API_VERSION, "x-ms-acl": "user::rwx,group::r-x,other::---" }, + validateStatus: () => true + }); + + assert.strictEqual(response.status, 200); + assert.strictEqual(response.data.directoriesSuccessful, 2); // dir + subdir + assert.strictEqual(response.data.filesSuccessful, 2); // file1.txt + file2.txt + assert.strictEqual(response.data.failureCount, 0); + + // Verify ACL propagated to a child + const childAcl = await dfsAxios.head( + `${dfsBaseUrl}/${fileSystemName}/dir/subdir/file2.txt?action=getAccessControl&${sas}`, + { headers: { "x-ms-version": BLOB_API_VERSION }, validateStatus: () => true } + ); + assert.strictEqual(childAcl.headers["x-ms-acl"], "user::rwx,group::r-x,other::---"); + + await containerClient.delete(); + }); + + it("modifies ACL recursively with mode=modify @loki @sql", async () => { + const fileSystemName = getUniqueName("fs"); + const containerClient = blobServiceClient.getContainerClient(fileSystemName); + await containerClient.create(); + + await axios.put(`${dfsBaseUrl}/${fileSystemName}/dir?resource=directory&${sas}`, undefined, + { headers: { "x-ms-version": BLOB_API_VERSION }, validateStatus: () => true }); + await 
axios.put(`${dfsBaseUrl}/${fileSystemName}/dir/file.txt?resource=file&${sas}`, undefined, + { headers: { "x-ms-version": BLOB_API_VERSION }, validateStatus: () => true }); + + // Set initial ACL + await dfsAxios.patch(`${dfsBaseUrl}/${fileSystemName}/dir?action=setAccessControlRecursive&mode=set&${sas}`, + null, { headers: { "x-ms-version": BLOB_API_VERSION, "x-ms-acl": "user::rwx,group::r-x,other::---" }, validateStatus: () => true }); + + // Modify: override group entry only + const modifyResponse = await dfsAxios.patch( + `${dfsBaseUrl}/${fileSystemName}/dir?action=setAccessControlRecursive&mode=modify&${sas}`, + null, + { headers: { "x-ms-version": BLOB_API_VERSION, "x-ms-acl": "group::rwx" }, validateStatus: () => true } + ); + assert.strictEqual(modifyResponse.status, 200); + + const check = await dfsAxios.head( + `${dfsBaseUrl}/${fileSystemName}/dir/file.txt?action=getAccessControl&${sas}`, + { headers: { "x-ms-version": BLOB_API_VERSION }, validateStatus: () => true } + ); + // user and other entries preserved, group updated + assert.ok(check.headers["x-ms-acl"].includes("user::rwx")); + assert.ok(check.headers["x-ms-acl"].includes("group::rwx")); + assert.ok(check.headers["x-ms-acl"].includes("other::---")); + + await containerClient.delete(); + }); + + it("removes ACL entries recursively with mode=remove @loki @sql", async () => { + const fileSystemName = getUniqueName("fs"); + const containerClient = blobServiceClient.getContainerClient(fileSystemName); + await containerClient.create(); + + await axios.put(`${dfsBaseUrl}/${fileSystemName}/dir?resource=directory&${sas}`, undefined, + { headers: { "x-ms-version": BLOB_API_VERSION }, validateStatus: () => true }); + await axios.put(`${dfsBaseUrl}/${fileSystemName}/dir/file.txt?resource=file&${sas}`, undefined, + { headers: { "x-ms-version": BLOB_API_VERSION }, validateStatus: () => true }); + + await dfsAxios.patch(`${dfsBaseUrl}/${fileSystemName}/dir?action=setAccessControlRecursive&mode=set&${sas}`, + null, { headers: { "x-ms-version": BLOB_API_VERSION, "x-ms-acl": "user::rwx,group::r-x,other::---" }, validateStatus: () => true }); + + const removeResponse = await dfsAxios.patch( + `${dfsBaseUrl}/${fileSystemName}/dir?action=setAccessControlRecursive&mode=remove&${sas}`, + null, + { headers: { "x-ms-version": BLOB_API_VERSION, "x-ms-acl": "group::" }, validateStatus: () => true } + ); + assert.strictEqual(removeResponse.status, 200); + + const check = await dfsAxios.head( + `${dfsBaseUrl}/${fileSystemName}/dir/file.txt?action=getAccessControl&${sas}`, + { headers: { "x-ms-version": BLOB_API_VERSION }, validateStatus: () => true } + ); + // group entry removed, others intact + assert.ok(!check.headers["x-ms-acl"].includes("group::")); + assert.ok(check.headers["x-ms-acl"].includes("user::rwx")); + + await containerClient.delete(); + }); + + // --------------------------------------------------------------------------- + // Append / flush position error paths + // --------------------------------------------------------------------------- + + it("rejects out-of-order append with 409 ConditionNotMet @loki @sql", async () => { + const fileSystemName = getUniqueName("fs"); + const containerClient = blobServiceClient.getContainerClient(fileSystemName); + await containerClient.create(); + + const fileName = "pos-error.txt"; + await axios.put(`${dfsBaseUrl}/${fileSystemName}/${fileName}?resource=file&${sas}`, undefined, + { headers: { "x-ms-version": BLOB_API_VERSION }, validateStatus: () => true }); + + // Correct first append 
(position=0) + const good = await dfsAxios.patch( + `${dfsBaseUrl}/${fileSystemName}/${fileName}?action=append&position=0&${sas}`, + "hello", + { headers: { "x-ms-version": BLOB_API_VERSION, "Content-Type": "application/octet-stream" }, validateStatus: () => true } + ); + assert.strictEqual(good.status, 202); + + // Wrong position (should be 5, sending 999) + const bad = await dfsAxios.patch( + `${dfsBaseUrl}/${fileSystemName}/${fileName}?action=append&position=999&${sas}`, + "world", + { headers: { "x-ms-version": BLOB_API_VERSION, "Content-Type": "application/octet-stream" }, validateStatus: () => true } + ); + assert.strictEqual(bad.status, 409); + assert.strictEqual(bad.data.error.code, "ConditionNotMet"); + + await containerClient.delete(); + }); + + it("rejects flush with wrong position with 409 InvalidFlushPosition @loki @sql", async () => { + const fileSystemName = getUniqueName("fs"); + const containerClient = blobServiceClient.getContainerClient(fileSystemName); + await containerClient.create(); + + const fileName = "flush-error.txt"; + await axios.put(`${dfsBaseUrl}/${fileSystemName}/${fileName}?resource=file&${sas}`, undefined, + { headers: { "x-ms-version": BLOB_API_VERSION }, validateStatus: () => true }); + + // Append 5 bytes correctly + await dfsAxios.patch( + `${dfsBaseUrl}/${fileSystemName}/${fileName}?action=append&position=0&${sas}`, + "hello", + { headers: { "x-ms-version": BLOB_API_VERSION, "Content-Type": "application/octet-stream" }, validateStatus: () => true } + ); + + // Flush with wrong position (actual is 5, we say 999) + const bad = await dfsAxios.patch( + `${dfsBaseUrl}/${fileSystemName}/${fileName}?action=flush&position=999&${sas}`, + null, + { headers: { "x-ms-version": BLOB_API_VERSION }, validateStatus: () => true } + ); + assert.strictEqual(bad.status, 409); + assert.strictEqual(bad.data.error.code, "InvalidFlushPosition"); + + await containerClient.delete(); + }); + + // --------------------------------------------------------------------------- + // Multi-cycle append → flush (C-1 regression) + // --------------------------------------------------------------------------- + + it("preserves data across two complete append→flush cycles @loki @sql", async () => { + const fileSystemName = getUniqueName("fs"); + const containerClient = blobServiceClient.getContainerClient(fileSystemName); + await containerClient.create(); + + const fileName = "multi-cycle.txt"; + await axios.put(`${dfsBaseUrl}/${fileSystemName}/${fileName}?resource=file&${sas}`, undefined, + { headers: { "x-ms-version": BLOB_API_VERSION }, validateStatus: () => true }); + + const chunk1 = "Hello, "; + const chunk2 = "World!"; + + // First cycle + await dfsAxios.patch(`${dfsBaseUrl}/${fileSystemName}/${fileName}?action=append&position=0&${sas}`, + chunk1, { headers: { "x-ms-version": BLOB_API_VERSION, "Content-Type": "application/octet-stream" }, validateStatus: () => true }); + await dfsAxios.patch(`${dfsBaseUrl}/${fileSystemName}/${fileName}?action=flush&position=${Buffer.byteLength(chunk1)}&${sas}`, + null, { headers: { "x-ms-version": BLOB_API_VERSION }, validateStatus: () => true }); + + // Second cycle + const offset = Buffer.byteLength(chunk1); + await dfsAxios.patch(`${dfsBaseUrl}/${fileSystemName}/${fileName}?action=append&position=${offset}&${sas}`, + chunk2, { headers: { "x-ms-version": BLOB_API_VERSION, "Content-Type": "application/octet-stream" }, validateStatus: () => true }); + const flush2 = await 
dfsAxios.patch(`${dfsBaseUrl}/${fileSystemName}/${fileName}?action=flush&position=${offset + Buffer.byteLength(chunk2)}&${sas}`, + null, { headers: { "x-ms-version": BLOB_API_VERSION }, validateStatus: () => true }); + assert.strictEqual(flush2.status, 200); + + const readRes = await dfsAxios.get(`${dfsBaseUrl}/${fileSystemName}/${fileName}?${sas}`, + { headers: { "x-ms-version": BLOB_API_VERSION }, validateStatus: () => true }); + assert.strictEqual(readRes.status, 200); + assert.strictEqual(readRes.data, "Hello, World!"); + + await containerClient.delete(); + }); + + // --------------------------------------------------------------------------- + // Rename to existing destination — overwrite semantics (M-1) + // --------------------------------------------------------------------------- + + it("renames onto an existing file, overwriting it @loki @sql", async () => { + const fileSystemName = getUniqueName("fs"); + const containerClient = blobServiceClient.getContainerClient(fileSystemName); + await containerClient.create(); + + await axios.put(`${dfsBaseUrl}/${fileSystemName}/src.txt?resource=file&${sas}`, undefined, + { headers: { "x-ms-version": BLOB_API_VERSION }, validateStatus: () => true }); + await axios.put(`${dfsBaseUrl}/${fileSystemName}/dest.txt?resource=file&${sas}`, undefined, + { headers: { "x-ms-version": BLOB_API_VERSION }, validateStatus: () => true }); + + const renameRes = await axios.put(`${dfsBaseUrl}/${fileSystemName}/dest.txt?${sas}`, undefined, { + headers: { "x-ms-version": BLOB_API_VERSION, "x-ms-rename-source": `/${EMULATOR_ACCOUNT_NAME}/${fileSystemName}/src.txt` }, + validateStatus: () => true + }); + assert.strictEqual(renameRes.status, 201, "Rename onto existing file should succeed (overwrite)"); + + // src should be gone + const srcHead = await dfsAxios.head(`${dfsBaseUrl}/${fileSystemName}/src.txt?${sas}`, + { headers: { "x-ms-version": BLOB_API_VERSION }, validateStatus: () => true }); + assert.strictEqual(srcHead.status, 404); + + await containerClient.delete(); + }); + + it("rejects rename onto a non-empty directory with 409 @loki @sql", async () => { + const fileSystemName = getUniqueName("fs"); + const containerClient = blobServiceClient.getContainerClient(fileSystemName); + await containerClient.create(); + + await axios.put(`${dfsBaseUrl}/${fileSystemName}/src?resource=directory&${sas}`, undefined, + { headers: { "x-ms-version": BLOB_API_VERSION }, validateStatus: () => true }); + await axios.put(`${dfsBaseUrl}/${fileSystemName}/dest?resource=directory&${sas}`, undefined, + { headers: { "x-ms-version": BLOB_API_VERSION }, validateStatus: () => true }); + await axios.put(`${dfsBaseUrl}/${fileSystemName}/dest/child.txt?resource=file&${sas}`, undefined, + { headers: { "x-ms-version": BLOB_API_VERSION }, validateStatus: () => true }); + + const renameRes = await axios.put(`${dfsBaseUrl}/${fileSystemName}/dest?${sas}`, undefined, { + headers: { "x-ms-version": BLOB_API_VERSION, "x-ms-rename-source": `/${EMULATOR_ACCOUNT_NAME}/${fileSystemName}/src` }, + validateStatus: () => true + }); + assert.strictEqual(renameRes.status, 409); + assert.strictEqual(renameRes.data.error.code, "DirectoryNotEmpty"); + + await containerClient.delete(); + }); + + // --------------------------------------------------------------------------- + // setProperties — reserved key protection (M-2) + // --------------------------------------------------------------------------- + + it("setProperties silently ignores reserved hdi_isfolder key @loki @sql", async () => { + const 
fileSystemName = getUniqueName("fs"); + const containerClient = blobServiceClient.getContainerClient(fileSystemName); + await containerClient.create(); + + await axios.put(`${dfsBaseUrl}/${fileSystemName}/file.txt?resource=file&${sas}`, undefined, + { headers: { "x-ms-version": BLOB_API_VERSION }, validateStatus: () => true }); + + // Attempt to flip hdi_isfolder to "true" via setProperties + const encoded = Buffer.from("true").toString("base64"); + await dfsAxios.patch(`${dfsBaseUrl}/${fileSystemName}/file.txt?action=setProperties&${sas}`, null, { + headers: { "x-ms-version": BLOB_API_VERSION, "x-ms-properties": `hdi_isfolder=${encoded}` }, + validateStatus: () => true + }); + + // Path should still be reported as a file, not a directory + const head = await dfsAxios.head(`${dfsBaseUrl}/${fileSystemName}/file.txt?${sas}`, + { headers: { "x-ms-version": BLOB_API_VERSION }, validateStatus: () => true }); + assert.strictEqual(head.headers["x-ms-resource-type"], "file"); + + await containerClient.delete(); + }); + + // --------------------------------------------------------------------------- + // ETag format (M-8) — DFS-created blobs should match Azure "0x..." format + // --------------------------------------------------------------------------- + + it("ETag from DFS path create matches Azure 0x... format @loki @sql", async () => { + const fileSystemName = getUniqueName("fs"); + const containerClient = blobServiceClient.getContainerClient(fileSystemName); + await containerClient.create(); + + const createRes = await axios.put(`${dfsBaseUrl}/${fileSystemName}/etag-test.txt?resource=file&${sas}`, undefined, + { headers: { "x-ms-version": BLOB_API_VERSION }, validateStatus: () => true }); + assert.strictEqual(createRes.status, 201); + + const etag = createRes.headers["etag"]; + assert.ok(etag, "ETag header should be present"); + assert.match(etag, /^"0x[0-9A-F]+"$/i, `ETag "${etag}" does not match Azure "0x..." 
format`); + + await containerClient.delete(); + }); + + // --------------------------------------------------------------------------- + // Non-numeric position parameter (m-5) + // --------------------------------------------------------------------------- + + it("rejects non-numeric append position gracefully @loki @sql", async () => { + const fileSystemName = getUniqueName("fs"); + const containerClient = blobServiceClient.getContainerClient(fileSystemName); + await containerClient.create(); + + await axios.put(`${dfsBaseUrl}/${fileSystemName}/pos-nan.txt?resource=file&${sas}`, undefined, + { headers: { "x-ms-version": BLOB_API_VERSION }, validateStatus: () => true }); + + const res = await dfsAxios.patch( + `${dfsBaseUrl}/${fileSystemName}/pos-nan.txt?action=append&position=garbage&${sas}`, + "data", + { headers: { "x-ms-version": BLOB_API_VERSION, "Content-Type": "application/octet-stream" }, validateStatus: () => true } + ); + // NaN position is treated as 0; an empty file expects position 0, so this succeeds + // The important thing is it doesn't crash (500) — either 202 or 409 is acceptable + assert.ok(res.status === 202 || res.status === 409, `Expected 202 or 409, got ${res.status}`); + + await containerClient.delete(); + }); + + // --------------------------------------------------------------------------- + // Pass-2: HNS flag survives setProperties PATCH (P2-C-1) + // --------------------------------------------------------------------------- + + it("HNS flag survives a filesystem setProperties PATCH @loki @sql", async () => { + const fileSystemName = getUniqueName("fs"); + await axios.put(`${dfsBaseUrl}/${fileSystemName}?resource=filesystem&${sas}`, undefined, + { headers: { "x-ms-version": BLOB_API_VERSION }, validateStatus: () => true }); + + // Patch with a user property + const propVal = Buffer.from("bar").toString("base64"); + const patchRes = await dfsAxios.patch(`${dfsBaseUrl}/${fileSystemName}?resource=filesystem&${sas}`, null, { + headers: { "x-ms-version": BLOB_API_VERSION, "x-ms-properties": `foo=${propVal}` }, + validateStatus: () => true + }); + assert.strictEqual(patchRes.status, 200); + + // HNS should still be enabled + const headRes = await dfsAxios.head(`${dfsBaseUrl}/${fileSystemName}?resource=filesystem&${sas}`, + { headers: { "x-ms-version": BLOB_API_VERSION }, validateStatus: () => true }); + assert.strictEqual(headRes.headers["x-ms-namespace-enabled"], "true"); + + // DFS path operation should still work + const createRes = await axios.put(`${dfsBaseUrl}/${fileSystemName}/test.txt?resource=file&${sas}`, undefined, + { headers: { "x-ms-version": BLOB_API_VERSION }, validateStatus: () => true }); + assert.strictEqual(createRes.status, 201); + + await dfsAxios.delete(`${dfsBaseUrl}/${fileSystemName}?resource=filesystem&${sas}`, + { headers: { "x-ms-version": BLOB_API_VERSION }, validateStatus: () => true }); + }); + + // --------------------------------------------------------------------------- + // Pass-2: listPaths returns 404 for non-existent directory (P2-M-1) + // --------------------------------------------------------------------------- + + it("listPaths returns 404 when the specified directory does not exist @loki @sql", async () => { + const fileSystemName = getUniqueName("fs"); + const containerClient = blobServiceClient.getContainerClient(fileSystemName); + await containerClient.create(); + + const res = await dfsAxios.get( + `${dfsBaseUrl}/${fileSystemName}?resource=filesystem&directory=nonexistent&recursive=true&${sas}`, + { headers: { 
"x-ms-version": BLOB_API_VERSION }, validateStatus: () => true } + ); + assert.strictEqual(res.status, 404); + + await containerClient.delete(); + }); + + // --------------------------------------------------------------------------- + // Pass-2: delete with non-matching If-Match returns 412 (P2-M-2) + // --------------------------------------------------------------------------- + + it("delete with non-matching If-Match returns 412 @loki @sql", async () => { + const fileSystemName = getUniqueName("fs"); + const containerClient = blobServiceClient.getContainerClient(fileSystemName); + await containerClient.create(); + + await axios.put(`${dfsBaseUrl}/${fileSystemName}/cond.txt?resource=file&${sas}`, undefined, + { headers: { "x-ms-version": BLOB_API_VERSION }, validateStatus: () => true }); + + const delRes = await dfsAxios.delete(`${dfsBaseUrl}/${fileSystemName}/cond.txt?${sas}`, { + headers: { "x-ms-version": BLOB_API_VERSION, "If-Match": `"0xDEADBEEF"` }, + validateStatus: () => true + }); + assert.strictEqual(delRes.status, 412); + + await containerClient.delete(); + }); + + // --------------------------------------------------------------------------- + // Pass-2: listPaths reflects stored ACL owner/group/permissions (P2-M-7) + // --------------------------------------------------------------------------- + + it("listPaths returns stored ACL owner and group for each path @loki @sql", async () => { + const fileSystemName = getUniqueName("fs"); + const containerClient = blobServiceClient.getContainerClient(fileSystemName); + await containerClient.create(); + + await axios.put(`${dfsBaseUrl}/${fileSystemName}/acl-file.txt?resource=file&${sas}`, undefined, + { headers: { "x-ms-version": BLOB_API_VERSION }, validateStatus: () => true }); + + // Set a specific owner + await dfsAxios.patch(`${dfsBaseUrl}/${fileSystemName}/acl-file.txt?action=setAccessControl&${sas}`, null, { + headers: { "x-ms-version": BLOB_API_VERSION, "x-ms-owner": "custom-owner", "x-ms-group": "custom-group" }, + validateStatus: () => true + }); + + const listRes = await dfsAxios.get( + `${dfsBaseUrl}/${fileSystemName}?resource=filesystem&recursive=true&${sas}`, + { headers: { "x-ms-version": BLOB_API_VERSION }, validateStatus: () => true } + ); + assert.strictEqual(listRes.status, 200); + const entry = listRes.data.paths.find((p: any) => p.name === "acl-file.txt"); + assert.ok(entry, "Expected acl-file.txt in listing"); + assert.strictEqual(entry.owner, "custom-owner"); + assert.strictEqual(entry.group, "custom-group"); + + await containerClient.delete(); + }); + + // --------------------------------------------------------------------------- + // Pass-3: GET on a directory path returns 400 PathIsDirectory (P3-M-5) + // --------------------------------------------------------------------------- + + it("GET on a directory path returns 400 PathIsDirectory @loki @sql", async () => { + const fileSystemName = getUniqueName("fs"); + const containerClient = blobServiceClient.getContainerClient(fileSystemName); + await containerClient.create(); + + await axios.put(`${dfsBaseUrl}/${fileSystemName}/mydir?resource=directory&${sas}`, undefined, + { headers: { "x-ms-version": BLOB_API_VERSION }, validateStatus: () => true }); + + const res = await dfsAxios.get(`${dfsBaseUrl}/${fileSystemName}/mydir?${sas}`, + { headers: { "x-ms-version": BLOB_API_VERSION }, validateStatus: () => true }); + assert.strictEqual(res.status, 400); + assert.strictEqual(res.data.error.code, "PathIsDirectory"); + + await containerClient.delete(); + }); + + 
// --------------------------------------------------------------------------- + // Pass-3: setAccessControlRecursive with invalid mode returns 400 (P3-M-4) + // --------------------------------------------------------------------------- + + it("setAccessControlRecursive with invalid mode returns 400 @loki @sql", async () => { + const fileSystemName = getUniqueName("fs"); + const containerClient = blobServiceClient.getContainerClient(fileSystemName); + await containerClient.create(); + + await axios.put(`${dfsBaseUrl}/${fileSystemName}/dir?resource=directory&${sas}`, undefined, + { headers: { "x-ms-version": BLOB_API_VERSION }, validateStatus: () => true }); + + const res = await dfsAxios.patch( + `${dfsBaseUrl}/${fileSystemName}/dir?action=setAccessControlRecursive&mode=invalid&${sas}`, + null, + { headers: { "x-ms-version": BLOB_API_VERSION, "x-ms-acl": "user::rwx" }, validateStatus: () => true } + ); + assert.strictEqual(res.status, 400); + assert.strictEqual(res.data.error.code, "InvalidQueryParameterValue"); + + await containerClient.delete(); + }); + + // --------------------------------------------------------------------------- + // Pass-3: listPaths non-recursive — subdirectory entries include eTag (P3-M-2) + // --------------------------------------------------------------------------- + + it("listPaths non-recursive includes eTag for subdirectory entries @loki @sql", async () => { + const fileSystemName = getUniqueName("fs"); + const containerClient = blobServiceClient.getContainerClient(fileSystemName); + await containerClient.create(); + + await axios.put(`${dfsBaseUrl}/${fileSystemName}/subdir?resource=directory&${sas}`, undefined, + { headers: { "x-ms-version": BLOB_API_VERSION }, validateStatus: () => true }); + await axios.put(`${dfsBaseUrl}/${fileSystemName}/subdir/file.txt?resource=file&${sas}`, undefined, + { headers: { "x-ms-version": BLOB_API_VERSION }, validateStatus: () => true }); + + const listRes = await dfsAxios.get( + `${dfsBaseUrl}/${fileSystemName}?resource=filesystem&recursive=false&${sas}`, + { headers: { "x-ms-version": BLOB_API_VERSION }, validateStatus: () => true } + ); + assert.strictEqual(listRes.status, 200); + const dirEntry = listRes.data.paths.find((p: any) => p.name === "subdir"); + assert.ok(dirEntry, "Expected subdir in listing"); + assert.ok(dirEntry.eTag, `Expected eTag on subdir entry, got: ${JSON.stringify(dirEntry)}`); + + await containerClient.delete(); + }); + + // --------------------------------------------------------------------------- + // Pass-3: FilesystemHandler.list with non-numeric maxResults (P3-m-4) + // --------------------------------------------------------------------------- + + it("filesystem list with non-numeric maxResults does not crash @loki @sql", async () => { + const res = await dfsAxios.get( + `${dfsBaseUrl}?resource=account&maxResults=abc&${sas}`, + { headers: { "x-ms-version": BLOB_API_VERSION }, validateStatus: () => true } + ); + assert.ok(res.status === 200 || res.status === 400, `Expected 200 or 400, got ${res.status}`); + }); +}); diff --git a/tests/blob/dfsSDKIntegration.test.ts b/tests/blob/dfsSDKIntegration.test.ts new file mode 100644 index 000000000..f4b49062d --- /dev/null +++ b/tests/blob/dfsSDKIntegration.test.ts @@ -0,0 +1,467 @@ +/** + * SDK Integration Tests for ADLS Gen2 (DFS) endpoint. + * + * Uses @azure/storage-file-datalake SDK to validate that the Azurite DFS + * endpoint is compatible with the official Azure DataLake SDK. 
+ * + * Wiki requirement: "Pass all language SDK tests" — this covers the JS SDK. + */ + +import { + DataLakeServiceClient, + DataLakeFileSystemClient, + StorageSharedKeyCredential +} from "@azure/storage-file-datalake"; +import * as assert from "assert"; + +import { configLogger } from "../../src/common/Logger"; +import BlobTestServerFactory from "../BlobTestServerFactory"; +import { + EMULATOR_ACCOUNT_KEY, + EMULATOR_ACCOUNT_NAME, + getUniqueName +} from "../testutils"; + +configLogger(false); + +describe("DFS SDK Integration (@azure/storage-file-datalake)", () => { + const factory = new BlobTestServerFactory(); + const blobServer = factory.createServer(false, true, false, undefined, true); + + const sharedKeyCredential = new StorageSharedKeyCredential( + EMULATOR_ACCOUNT_NAME, + EMULATOR_ACCOUNT_KEY + ); + + const serviceClient = new DataLakeServiceClient( + `http://${blobServer.config.host}:${blobServer.config.port}/${EMULATOR_ACCOUNT_NAME}`, + sharedKeyCredential + ); + + before(async () => { + await blobServer.start(); + }); + + after(async () => { + await blobServer.close(); + await blobServer.clean(); + }); + + // --------------------------------------------------------------------------- + // Filesystem operations + // --------------------------------------------------------------------------- + + describe("Filesystem operations", () => { + it("creates and deletes a filesystem @loki @sql", async () => { + const fsName = getUniqueName("sdkfs"); + const fsClient = serviceClient.getFileSystemClient(fsName); + + const createResponse = await fsClient.create(); + assert.strictEqual(createResponse._response.status, 201); + + const deleteResponse = await fsClient.delete(); + assert.strictEqual(deleteResponse._response.status, 202); + }); + + it("gets filesystem properties @loki @sql", async () => { + const fsName = getUniqueName("sdkfs"); + const fsClient = serviceClient.getFileSystemClient(fsName); + await fsClient.create(); + + const props = await fsClient.getProperties(); + assert.ok(props.etag); + assert.ok(props.lastModified); + + await fsClient.delete(); + }); + + it("lists filesystems @loki @sql", async () => { + const fsName = getUniqueName("sdkfs"); + const fsClient = serviceClient.getFileSystemClient(fsName); + await fsClient.create(); + + const filesystems: string[] = []; + for await (const fs of serviceClient.listFileSystems()) { + filesystems.push(fs.name); + } + assert.ok(filesystems.includes(fsName), `Expected ${fsName} in filesystem list`); + + await fsClient.delete(); + }); + }); + + // --------------------------------------------------------------------------- + // Directory operations + // --------------------------------------------------------------------------- + + describe("Directory operations", () => { + let fsClient: DataLakeFileSystemClient; + + beforeEach(async () => { + fsClient = serviceClient.getFileSystemClient(getUniqueName("sdkfs")); + await fsClient.create(); + }); + + afterEach(async () => { + await fsClient.delete(); + }); + + it("creates and deletes a directory @loki @sql", async () => { + const dirClient = fsClient.getDirectoryClient("test-dir"); + const createResponse = await dirClient.create(); + assert.strictEqual(createResponse._response.status, 201); + + const props = await dirClient.getProperties(); + assert.ok(props.etag); + + await dirClient.delete(); + }); + + it("creates nested directories @loki @sql", async () => { + const dirClient = fsClient.getDirectoryClient("parent/child/grandchild"); + await dirClient.create(); + + // Verify all 
intermediate dirs exist + const parentProps = await fsClient.getDirectoryClient("parent").getProperties(); + assert.ok(parentProps.etag); + + const childProps = await fsClient.getDirectoryClient("parent/child").getProperties(); + assert.ok(childProps.etag); + + const grandchildProps = await dirClient.getProperties(); + assert.ok(grandchildProps.etag); + + await fsClient.getDirectoryClient("parent").delete(true); + }); + + it("moves (renames) a directory @loki @sql", async () => { + const srcDir = fsClient.getDirectoryClient("src-dir"); + await srcDir.create(); + + // Create a file inside + const fileClient = srcDir.getFileClient("file.txt"); + await fileClient.create(); + + // Move (rename) directory + await srcDir.move("dest-dir"); + + // Verify new path exists + const destProps = await fsClient.getDirectoryClient("dest-dir").getProperties(); + assert.ok(destProps.etag); + + // Verify old path doesn't exist + try { + await fsClient.getDirectoryClient("src-dir").getProperties(); + assert.fail("Expected 404 for old directory"); + } catch (error: any) { + assert.strictEqual(error.statusCode, 404); + } + + await fsClient.getDirectoryClient("dest-dir").delete(true); + }); + }); + + // --------------------------------------------------------------------------- + // File operations + // --------------------------------------------------------------------------- + + describe("File operations", () => { + let fsClient: DataLakeFileSystemClient; + + beforeEach(async () => { + fsClient = serviceClient.getFileSystemClient(getUniqueName("sdkfs")); + await fsClient.create(); + }); + + afterEach(async () => { + await fsClient.delete(); + }); + + it("creates an empty file @loki @sql", async () => { + const fileClient = fsClient.getFileClient("empty-file.txt"); + const createResponse = await fileClient.create(); + assert.strictEqual(createResponse._response.status, 201); + + const props = await fileClient.getProperties(); + assert.ok(props.etag); + assert.strictEqual(props.contentLength, 0); + + await fileClient.delete(); + }); + + it("appends and flushes data, then reads it back @loki @sql", async () => { + const fileClient = fsClient.getFileClient("data-file.txt"); + await fileClient.create(); + + const content = "Hello from the DataLake SDK!"; + const buffer = Buffer.from(content); + + // Append + flush + await fileClient.append(buffer, 0, buffer.length); + await fileClient.flush(buffer.length); + + // Read back + const downloadResponse = await fileClient.read(); + const downloaded = await streamToString(downloadResponse.readableStreamBody!); + assert.strictEqual(downloaded, content); + + await fileClient.delete(); + }); + + it("writes multi-chunk file and reads back @loki @sql", async () => { + const fileClient = fsClient.getFileClient("multi-chunk.txt"); + await fileClient.create(); + + const chunk1 = Buffer.from("First chunk. "); + const chunk2 = Buffer.from("Second chunk. "); + const chunk3 = Buffer.from("Third chunk."); + + await fileClient.append(chunk1, 0, chunk1.length); + await fileClient.append(chunk2, chunk1.length, chunk2.length); + await fileClient.append(chunk3, chunk1.length + chunk2.length, chunk3.length); + await fileClient.flush(chunk1.length + chunk2.length + chunk3.length); + + const downloadResponse = await fileClient.read(); + const downloaded = await streamToString(downloadResponse.readableStreamBody!); + assert.strictEqual(downloaded, "First chunk. Second chunk. 
Third chunk."); + + await fileClient.delete(); + }); + + it("deletes a file @loki @sql", async () => { + const fileClient = fsClient.getFileClient("to-delete.txt"); + await fileClient.create(); + + await fileClient.delete(); + + try { + await fileClient.getProperties(); + assert.fail("Expected 404 after delete"); + } catch (error: any) { + assert.strictEqual(error.statusCode, 404); + } + }); + + it("moves (renames) a file @loki @sql", async () => { + const fileClient = fsClient.getFileClient("original.txt"); + await fileClient.create(); + + await fileClient.move("renamed.txt"); + + const renamedProps = await fsClient.getFileClient("renamed.txt").getProperties(); + assert.ok(renamedProps.etag); + + try { + await fsClient.getFileClient("original.txt").getProperties(); + assert.fail("Expected 404 for old file"); + } catch (error: any) { + assert.strictEqual(error.statusCode, 404); + } + + await fsClient.getFileClient("renamed.txt").delete(); + }); + }); + + // --------------------------------------------------------------------------- + // ACL operations + // --------------------------------------------------------------------------- + + describe("ACL operations", () => { + let fsClient: DataLakeFileSystemClient; + + beforeEach(async () => { + fsClient = serviceClient.getFileSystemClient(getUniqueName("sdkfs")); + await fsClient.create(); + }); + + afterEach(async () => { + await fsClient.delete(); + }); + + it("sets and gets access control on a file @loki @sql", async () => { + const fileClient = fsClient.getFileClient("acl-file.txt"); + await fileClient.create(); + + await fileClient.setAccessControl( + [ + { accessControlType: "user", defaultScope: false, entityId: "", permissions: { read: true, write: true, execute: true } }, + { accessControlType: "group", defaultScope: false, entityId: "", permissions: { read: true, write: false, execute: true } }, + { accessControlType: "other", defaultScope: false, entityId: "", permissions: { read: false, write: false, execute: false } } + ] + ); + + const acl = await fileClient.getAccessControl(); + assert.ok(acl.owner); + assert.ok(acl.group); + assert.ok(acl.permissions); + + await fileClient.delete(); + }); + + it("sets permissions on a directory @loki @sql", async () => { + const dirClient = fsClient.getDirectoryClient("acl-dir"); + await dirClient.create(); + + await dirClient.setPermissions({ + owner: { read: true, write: true, execute: true }, + group: { read: true, write: false, execute: true }, + other: { read: false, write: false, execute: false }, + stickyBit: false, + extendedAcls: false + }); + + const acl = await dirClient.getAccessControl(); + assert.ok(acl.permissions); + + await dirClient.delete(); + }); + }); + + // --------------------------------------------------------------------------- + // List paths + // --------------------------------------------------------------------------- + + describe("List paths", () => { + let fsClient: DataLakeFileSystemClient; + + beforeEach(async () => { + fsClient = serviceClient.getFileSystemClient(getUniqueName("sdkfs")); + await fsClient.create(); + }); + + afterEach(async () => { + await fsClient.delete(); + }); + + it("lists paths recursively @loki @sql", async () => { + await fsClient.getDirectoryClient("dir1").create(); + await fsClient.getFileClient("dir1/file1.txt").create(); + await fsClient.getFileClient("dir1/file2.txt").create(); + await fsClient.getFileClient("root-file.txt").create(); + + const paths: string[] = []; + for await (const path of fsClient.listPaths({ recursive: true 
})) { + paths.push(path.name!); + } + + assert.ok(paths.includes("dir1"), "Expected dir1 in path list"); + assert.ok(paths.includes("dir1/file1.txt"), "Expected dir1/file1.txt"); + assert.ok(paths.includes("dir1/file2.txt"), "Expected dir1/file2.txt"); + assert.ok(paths.includes("root-file.txt"), "Expected root-file.txt"); + }); + + it("lists paths non-recursively (directory level) @loki @sql", async () => { + await fsClient.getDirectoryClient("dir-a").create(); + await fsClient.getFileClient("dir-a/nested.txt").create(); + await fsClient.getFileClient("top-level.txt").create(); + + const paths: string[] = []; + for await (const path of fsClient.listPaths({ recursive: false })) { + paths.push(path.name!); + } + + assert.ok(paths.includes("dir-a"), "Expected dir-a in non-recursive list"); + assert.ok(paths.includes("top-level.txt"), "Expected top-level.txt"); + // nested file should NOT appear at top level + assert.ok(!paths.includes("dir-a/nested.txt"), "dir-a/nested.txt should not appear in non-recursive list"); + }); + }); + + // --------------------------------------------------------------------------- + // Lease operations via SDK + // --------------------------------------------------------------------------- + + describe("Lease operations", () => { + let fsClient: DataLakeFileSystemClient; + + beforeEach(async () => { + fsClient = serviceClient.getFileSystemClient(getUniqueName("sdkfs")); + await fsClient.create(); + }); + + afterEach(async () => { + await fsClient.delete(); + }); + + it("checks lease state on a file @loki @sql", async () => { + const fileClient = fsClient.getFileClient("lease-file.txt"); + await fileClient.create(); + + const props = await fileClient.getProperties(); + assert.strictEqual(props.leaseState, "available"); + + await fileClient.delete(); + }); + }); + + // --------------------------------------------------------------------------- + // Cross-API compatibility + // --------------------------------------------------------------------------- + + describe("Cross-API compatibility", () => { + let fsClient: DataLakeFileSystemClient; + let fsName: string; + + beforeEach(async () => { + fsName = getUniqueName("sdkfs"); + fsClient = serviceClient.getFileSystemClient(fsName); + await fsClient.create(); + }); + + afterEach(async () => { + await fsClient.delete(); + }); + + it("file created via DFS is visible via Blob API @loki @sql", async () => { + const fileClient = fsClient.getFileClient("cross-api-file.txt"); + await fileClient.create(); + + // Append and flush content + const content = Buffer.from("cross-api content"); + await fileClient.append(content, 0, content.length); + await fileClient.flush(content.length); + + // Read via Blob API + const { BlobServiceClient, StorageSharedKeyCredential: BlobCredential } = await import("@azure/storage-blob"); + const blobServiceClient = new BlobServiceClient( + `http://127.0.0.1:${blobServer.config.port}/${EMULATOR_ACCOUNT_NAME}`, + new BlobCredential(EMULATOR_ACCOUNT_NAME, EMULATOR_ACCOUNT_KEY) + ); + const containerClient = blobServiceClient.getContainerClient(fsName); + const blobClient = containerClient.getBlobClient("cross-api-file.txt"); + const downloadResponse = await blobClient.download(); + const downloaded = await streamToString(downloadResponse.readableStreamBody!); + assert.strictEqual(downloaded, "cross-api content"); + }); + + it("blob created via Blob API is visible via DFS @loki @sql", async () => { + const { BlobServiceClient, StorageSharedKeyCredential: BlobCredential } = await 
import("@azure/storage-blob"); + const blobServiceClient = new BlobServiceClient( + `http://127.0.0.1:${blobServer.config.port}/${EMULATOR_ACCOUNT_NAME}`, + new BlobCredential(EMULATOR_ACCOUNT_NAME, EMULATOR_ACCOUNT_KEY) + ); + const containerClient = blobServiceClient.getContainerClient(fsName); + + // Upload blob via Blob API + const content = "blob-api content"; + const blockBlobClient = containerClient.getBlockBlobClient("blob-created.txt"); + await blockBlobClient.upload(content, content.length); + + // Read via DFS + const fileClient = fsClient.getFileClient("blob-created.txt"); + const readResponse = await fileClient.read(); + const downloaded = await streamToString(readResponse.readableStreamBody!); + assert.strictEqual(downloaded, "blob-api content"); + }); + }); +}); + +// Helper to convert a readable stream to string +async function streamToString(stream: NodeJS.ReadableStream): Promise { + const chunks: Buffer[] = []; + return new Promise((resolve, reject) => { + stream.on("data", (chunk: Buffer) => chunks.push(chunk)); + stream.on("end", () => resolve(Buffer.concat(chunks).toString("utf8"))); + stream.on("error", reject); + }); +}