Shared Multi Step Form for Object Storage Connectors #8467
Conversation
Have migrated the fix from #8354.
royendo
left a comment
Looks good from Product QA.
Hey Cyrus, a while back we wrote a design doc for connector forms that aligned on using JSON Schema as the form definition format: https://www.notion.so/rilldata/Snippets-API-design-doc-1f5ba33c8f5780798a0cfc819a406b42. The idea is that these schemas eventually live on the backend and can be used by both the UI and AI/CLI.

Defining a config-driven form system that works across multiple connectors is definitely the right direction! But rather than the custom TypeScript format, I'd suggest defining these forms in JSON Schema. Here's a rough sketch of what S3 might look like:

{
"$schema": "http://json-schema.org/draft-07/schema#",
"type": "object",
"properties": {
"auth_method": {
"type": "string",
"title": "Authentication method",
"enum": ["access_keys", "public"],
"default": "access_keys",
"x-display": "radio"
},
"aws_access_key_id": {
"type": "string",
"title": "Access Key ID",
"description": "AWS access key ID for the bucket",
"x-secret": true,
"x-step": "connector",
"x-visible-if": { "auth_method": "access_keys" }
},
"aws_secret_access_key": {
"type": "string",
"title": "Secret Access Key",
"description": "AWS secret access key for the bucket",
"x-secret": true,
"x-step": "connector",
"x-visible-if": { "auth_method": "access_keys" }
},
"path": {
"type": "string",
"title": "S3 URI",
"pattern": "^s3://",
"x-step": "source"
},
"name": {
"type": "string",
"title": "Model name",
"pattern": "^[a-zA-Z_][a-zA-Z0-9_]*$",
"x-step": "source"
}
},
"allOf": [
{
"if": { "properties": { "auth_method": { "const": "access_keys" } } },
"then": { "required": ["aws_access_key_id", "aws_secret_access_key"] }
}
]
}

This is illustrative; you'll need to think through the right JSON Schema properties. Try to stay as close to native JSON Schema as possible before reaching for custom x-* extensions.
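One nice property of this approach is that the same schema can drive both rendering and validation. A minimal sketch of the validation side, assuming Ajv as the validator (my illustration, not part of this PR):

import Ajv from "ajv";
// Assume the schema above is saved alongside the connector metadata.
import s3Schema from "./s3.schema.json";

// strict: false tolerates the custom x-* annotations the renderer uses.
const ajv = new Ajv({ allErrors: true, strict: false });
const validate = ajv.compile(s3Schema);

const values = {
  auth_method: "access_keys",
  aws_access_key_id: "AKIA...",
  path: "s3://my-bucket/data/*.parquet",
  name: "my_model",
};

// Missing aws_secret_access_key trips the allOf/if/then requirement above.
if (!validate(values)) {
  console.log(validate.errors);
}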
There may be better ways to model some of these. Let me know if you want to talk through it. Developed in collaboration with Claude Code
ericpgreen2
left a comment
See the comment above. A declarative config language to define these forms is definitely the right direction! It's so close to the JSON Schema approach we had discussed previously. This seems like a good time to take your code here one step closer to that design doc.
Added the JSON Schema changes in 83bfcdf.
Nice work adopting JSON Schema; this is the right direction. One structural concern with the current flow: a cleaner approach would be a generic, schema-driven renderer. Keep the renderer generic and use slots for context-specific UI outside the form fields themselves: the "Already connected? Import your data" link, connector-specific error messages, etc. That way the same renderer can be reused for non-connector templates later.

For incremental migration, isolate the new pattern in its own top-level feature directory to align with the design doc. This keeps the new approach cleanly separated and positions it as a general pattern for template-based code generation, not just source connectors.

Optionally, you could also extract the file generation logic behind an interface like this:

// Today: implemented client-side
// Tomorrow: calls RuntimeService.GenerateFile
async function generateFile(args: {
template: string;
properties: Record<string, unknown>;
preview?: boolean;
}): Promise<{
addedPaths: string[];
modifiedPaths: string[];
previews: Record<string, string>; // path → YAML
}>

This would make the backend migration a simple swap: same interface, different implementation. Let me know if you want to talk through any of this. Developed in collaboration with Claude Code
ericpgreen2
left a comment
See above feedback on the JSON Schema implementation
Good progress. One more structural issue: the renderer still has connector-specific logic baked in. This makes it a "connector form renderer" rather than a generic "JSON Schema form renderer." The principle: the renderer should be a pure function of the schema. It shouldn't know about connectors, sources, or auth methods.

The auth method pattern (a radio with nested fields) is just one instance of a general pattern: "enum field with conditionally visible related fields." The renderer should handle that generically, as in the sketch below.
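A minimal sketch of generic conditional visibility, assuming form values live in a plain object and x-visible-if maps field names to the values that make a property visible (the helper is illustrative, not code from this PR):

// A property schema carrying the optional x-visible-if annotation from the S3 sketch above.
interface PropertySchema {
  "x-visible-if"?: Record<string, unknown>;
  [keyword: string]: unknown;
}

// Visible when there is no condition, or when every referenced field matches its expected value.
function isVisible(prop: PropertySchema, values: Record<string, unknown>): boolean {
  const condition = prop["x-visible-if"];
  if (!condition) return true;
  return Object.entries(condition).every(([field, expected]) => values[field] === expected);
}

// e.g. hiding the key fields when the user picks public access:
// isVisible({ "x-visible-if": { auth_method: "access_keys" } }, { auth_method: "public" }) === false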
Connector-specific logic belongs in a parent component that owns the connector workflow: the renderer just renders fields; the parent orchestrates the steps. For the directory structure, keep the generic rendering infrastructure in its own directory, separate from the connector-specific pieces, so it can be reused for other template-based forms. Let me know if you want to talk through any of this. Developed in collaboration with Claude Code
ericpgreen2
left a comment
Getting closer! Please see feedback above.
This reverts commit 99f076d.
I've added the changes for the directory structure.
I've made some attempts to address this, but some regressions are expected along the way. It will require some time to fully migrate to a schema-driven design.
nit: let's not remove any of the required keys from s3.go, i.e. aws_role_arn, since these might be in use.

This PR implements a shared, config-driven multi-step form architecture for object storage connectors (GCS, S3, Azure). It replaces a GCS-specific implementation with a generalized system that splits connector configuration into two steps: a connector step (credentials and authentication) and a source step (path and model name).
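As an illustration of how the two-step split could map onto the JSON Schema discussed above, here is a small sketch that partitions properties by the x-step annotation (my own example, not code from this PR):

// Partition a JSON Schema's properties into wizard steps using the x-step annotation.
type Step = "connector" | "source";

function propertiesForStep(
  schema: { properties: Record<string, Record<string, unknown>> },
  step: Step
): string[] {
  return Object.entries(schema.properties)
    .filter(([, prop]) => prop["x-step"] === step)
    .map(([name]) => name);
}

// With the S3 sketch above:
// propertiesForStep(s3Schema, "connector") → ["aws_access_key_id", "aws_secret_access_key"]
// propertiesForStep(s3Schema, "source")    → ["path", "name"]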
Checklist: