
Releases: DataLinkDC/dinky

Dinky v1.2.0-rc4

24 Nov 16:35
1ac0adf
Pre-release

New features

  • Added China npm mirror sources (selectable via a build profile)
  • Support Flink 1.20
  • Support FlinkCDC 3.2.0
  • Added a built-in Flink History Server, which greatly reduces incorrect task status detection
  • Added page prompts for roles and permissions
  • Added a loading indicator while lineage data is being fetched
  • Implemented job import and export
  • Support mock testing of FlinkSQL and CDCSOURCE tasks
  • FlinkSQL Studio supports real-time task status updates
  • Added a boot initialization page for first-time deployment

Fixes

  • Fix SHOW DATABASES failing to execute normally
  • Fix several JSON serialization issues
  • Fix SQL CLI issues under Flink 1.19
  • Fix the task port being unavailable in K8s mode
  • Fix several issues in the Git project feature (default values, drag-and-drop sorting)
  • Fix the savepoint path logic in kubernetes-operator mode and adjust how the Flink configuration is obtained
  • Fix the long page freeze when too many tasks are opened on the data development page
  • Fix issues in the Git project build process
  • Fix the thumbnail display issue in the code editor
  • Fix the automatic SQL initialization issue for PostgreSQL
  • Fix task submission issues in local mode
  • Fix data type conversion issues during Oracle whole-database synchronization
  • Fix the exception thrown on the workbench when no job instance exists
  • Fix the syntax error when querying data in PostgreSQL
  • Fix Flyway not supporting MySQL 5.7
  • Fix incorrect primary key column detection for the Oracle data source type
  • Fix task list sorting not taking effect
  • Fix repeated refreshes of the Git project page
  • Fix a possible null pointer exception in the DingTalk alert
  • Fix the proxied Flink address issue
  • Fix the catalog table structure not displaying correctly

Optimization

  • Optimize HTTPS support when obtaining task details
  • Optimize the workbench page layout
  • Delete the unique index on the dinky_cluster table to resolve unique index conflicts when high availability is enabled in Yarn/K8s
  • Optimize some Mapper queries
  • Optimize the declared attribute types of the Git project backend class
  • Delete the prompt text in UDF registration management
  • Optimize some page layouts to be more user-friendly on small screens
  • Optimize the version update logic to solve upgrade-related cache problems (implemented via automatic version comparison)
  • Optimize virtual scrolling in the data source details tree, which did not take effect when there were too many nodes and caused slow page rendering
  • Optimize the login page to solve lag caused by excessive resources
  • Optimize application startup speed
  • Change clusters started from a cluster configuration to manual registration
  • Optimize the display layout for overly long system configuration descriptions
  • Optimize the overall layout and rendering efficiency of the Flink task operator graph in the operation and maintenance center

Refactoring

  • Refactor the user.dir acquisition logic to avoid resolving the wrong project root path in different deployment environments
  • Refactor SSE to WebSocket
  • Refactor the request method for obtaining task monitoring data
  • Remove hutool-json to improve data conversion efficiency

Documentation

  • Added documentation for integrating Dinky into Datasophon
  • Added documentation for using the SQL CLI
  • Optimized the document footer
  • Added documentation for binding roles to tenants

CI/CD

  • Added selectable version options to the GitHub bug report template
  • Removed the China repository proxy configuration from the documentation deployment

v1.1.0

30 Jul 14:03
2a13c50

Dinky-1.1.0 Release Note

Incompatible changes

  • v1.1.0 introduces the automatic schema upgrade framework (Flyway), using the v1.0.2 table structure/data as the default base version. If your version is below v1.0.2, you must first upgrade to the v1.0.2 table structure following the official upgrade tutorial. If your version is v1.0.2+, you can upgrade directly; the migration runs automatically and does not affect historical data. Fresh deployments can ignore this note.

  • Because flink-cdc was donated to the Apache Foundation, its package names change in the new version and no compatibility layer is possible. Dinky v1.1.0 and above uses the new package names, so your flink-cdc dependencies must be upgraded to Flink CDC v3.1+; otherwise it will not work.

  • Removed the Scala version distinction when packaging; development now targets Scala 2.12 only, and Scala 2.11.x is no longer supported.

New Features

  • Added Flyway schema upgrade framework.
  • Task directory supports flexible sorting.
  • Implemented task-level permission control and supports different permission control strategies.
  • Optimized tenant creation to automatically associate the administrator user.
  • Added the ability to kill the process directly when task submission deadlocks.
  • Support deploying Dinky on K8s.
  • Implement data preview.
  • Added support for UDF injection configuration in data development.
  • Added sink-side table name mapping for whole-database synchronization (CDCSOURCE), with regex-based mapping rules.
  • Added Dashboard page.
  • Added Paimon data source type.
  • Added SQL-Cli.

Fixes

  • Fixed the K8s account.name value issue and the Conf initialization issue when deleting a cluster.
  • Fixed the issue of flink-cdc losing SQL in application mode.
  • Fixed the issue where the task creation time was not reset when copying tasks.
  • Fixed the task list positioning problem.
  • Solved the problem of user-defined classes in user Jars not being compiled when submitting Jar tasks.
  • Fixed incorrect alert content in enterprise WeChat (WeCom) app mode.
  • Fixed the problem of flink-1.19 not being able to submit tasks.
  • Fixed the startup script not supporting jdk11.
  • Fixed the problem of cluster instances not being deleted.
  • Fixed the problem of UDF not finding the class in Flink SQL tasks.
  • Fixed the problem of the data development page not updating the state when the size changes.
  • Fixed the problem of not being able to get the latest high availability address defined in custom configuration.
  • Fixed the problem of not recognizing the manual configuration of rest.address and rest.port.

Optimizations

  • Optimized the prompt text in resource configuration.
  • Optimized the DDL generation logic of the MySQL data source type.
  • Optimized some front-end dependencies and front-end prompt information.
  • Optimized the copy path function of the resource center, supporting multiple application scenarios within dinky.
  • Optimized the monitoring function, using the monitoring function switch in dinky's configuration center to control all monitoring within dinky.
  • Optimized some front-end judgment logic.

Restructuring

  • Moved the alarm rules to the alarm route under the registration center.
  • Replaced Paimon with SQLite as the monitoring storage backend, removed the hard dependency on the hadoop-uber package (except in Hadoop environments), and added periodic cleanup.
  • Restructured the monitoring page, removing some built-in service monitoring.

Documentation

  • Added documentation for deploying dinky on k8s.
  • Optimized the Docker deployment documentation.
  • Added documentation for sink-side table name mapping in whole-database synchronization (CDCSOURCE).

v1.0.3

05 Jun 07:21
9a81a38

Dinky-1.0.3 Release Note

Upgrade Instructions

1.0.3 is a bug-fix release with no table structure changes. No additional SQL scripts need to be executed during the upgrade; simply overwrite the existing installation, paying attention to configuration file changes and dependency placement.

About the Scala version: the release build uses Scala 2.12. If your environment must use Scala 2.11, compile it yourself; refer to Compile and Deploy and change the scala-2.12 profile to scala-2.11.

New Features

  • Added the ability to manually kill the process when a task gets stuck during execution

Fixes

  • Fix Yarn Application mode failing to execute tasks on Flink 1.19
  • Fix the start/stop scripts and adapt the GC parameters for JDK 11
  • Fix the UDF class not being found after publishing
  • Fixed the precedence issue where SET statements in Application task SQL could not override configuration

Optimization

  • Optimize monitoring so that it no longer causes excessive CPU load and unreleased threads in the Dinky service
  • Optimize the Dinky monitoring configuration: Configuration Center -> Global Configuration -> Metrics Configuration -> Dinky JVM Monitor Switch now controls whether Flink task monitoring is enabled
  • Optimize the data type conversion logic of Oracle whole database synchronization
  • Optimize the front-end rendering performance and display effect of monitoring data

v1.0.2

06 May 14:19
b7c82ed

Dinky-1.0.2 Release Note

Upgrade Instructions

  • 1.0.2 is a bug-fix release with table structure/data changes. Please execute DINKY_HOME/sql/upgrade/1.0.2_schema/<data source type>/dinky_dml.sql (see the example below)

About the Scala version: the release build uses Scala 2.12. If your environment must use Scala 2.11, compile it yourself; refer to Compile and Deploy and change the scala-2.12 profile to scala-2.11.
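
A minimal sketch of running the upgrade script, assuming MySQL as Dinky's metadata database, a database named dinky, and /opt/dinky as DINKY_HOME (all hypothetical placeholders; substitute your own values and data source type directory):

```sql
-- Run the 1.0.2 upgrade DML against Dinky's metadata database.
-- The database name and paths below are illustrative placeholders.
USE dinky;
-- From the mysql client, SOURCE executes the script shipped with the release:
SOURCE /opt/dinky/sql/upgrade/1.0.2_schema/mysql/dinky_dml.sql;
```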

New Feature

  • Adapted to the various REST service types (SvcType) in KubernetesApplicationOperator mode and revised the JobId acquisition logic
  • Added SSE heartbeat mechanism
  • Added the function of automatically retrieving the latest highly available JobManager address (currently implemented in Yarn; not yet implemented in K8s)
  • Added the function of clearing logs in the console during data development
  • Support Flink1.19
  • Add task group related configuration when pushing to Apache DolphinScheduler
  • Added the ability to submit YarnApplication tasks as a user-specified user
  • The startup script adds GC related startup parameters and supports configuring the DINKY_HOME environment variable
  • Implement FlinkSQL configuration item in cluster configuration to support RS protocol (Yarn mode only)

Fix

  • Fixed the problem of global variables not being recognized in YarnApplication mode, and reconstructed the YarnApplication submission method
  • Fixed the problem of data source heartbeat detection feedback error
  • Fix the possible 404 issue in front-end route jump
  • Fixed the issue of incorrect error prompt when global variable does not exist
  • Fixed the issue of cursor movement and flickering in the editor during front-end data development
  • Fixed the path error in the DockerfileDinkyFlink Docker file
  • Fixed Python options configuration not being recognized
  • Fixed null pointer exception in role user list
  • Fixed some issues when submitting K8s tasks
  • Fixed Oracle's Time type conversion problem when synchronizing the entire database
  • Fixed the problem that k8s pod template cannot be parsed correctly
  • Fixed the issue where SPI failed to load CodeGeneratorImpl
  • Fixed an issue where numeric columns declared with UNSIGNED / ZEROFILL keywords would cause parsing mismatches
  • Fixed the issue where the status of batch tasks is still unknown after completion
  • Fixed some unsafe interfaces that can be accessed without login authentication
  • Fixed the problem of unknown status in Pre-Job mode
  • Fixed the problem of retrieving multiple job instances due to duplicate Jid
  • Fixed the problem that the user list cannot be searched using worknum
  • Fixed the query data button on the right side of the result tab not rendering correctly when querying data
  • Fixed issues with print table syntax
  • Fixed the problem that the resource list cannot be refreshed after adding or modifying it
  • Fixed incorrect rolling updates of task status in the data development console
  • Fixed the problem of occasional packaging failure
  • Fixed problems when building Git projects

Optimization

  • Optimize start and stop scripts
  • Optimize the problem of partial overflow of the global configuration page
  • Optimize the tips in UDF management
  • Optimize the user experience of the operation and maintenance center list page and support sorting by time
  • Optimize the default data repository address in Git projects
  • Optimize Flink Jar task submission to support batch tasks
  • Optimize the right-click menu so it remains usable when it overflows the visible area
  • Optimize the primary key of the list component in the operation and maintenance center
  • When modifying a task, the template field is now read-only instead of editable
  • Optimize the display method and type of cluster configuration
  • Optimize the logic of deleting clusters in K8s mode
  • Fixed the problem that the cluster is not automatically released in Application mode
  • Remove the logic of using Paimon for data source caching and change it to the default memory cache, which can be configured as redis cache
  • Removed the automatic pop-up of Console when switching tasks
  • Optimize the rendering logic of resource management; the resource management feature cannot be used when resources are not enabled
  • Optimize the login status detection logic
  • Optimize login page feedback prompts
  • Removed some unused code on the front end
  • Optimize whole-database synchronization so that the operator graph is built in a consistent order across builds; inconsistent ordering previously made it impossible to recover from a savepoint
  • Optimize some tips in resource configuration
  • Optimize and improve the copy function of the resource center, now supporting all reference scenarios within Dinky

Security

  • Excluded some high-risk exposed JMX endpoints

Document

  • Optimize expression variable expansion documentation
  • Optimize some practical documents for synchronization of the entire database
  • Add JDBC FAQ about tinyint type
  • Added a carousel image on the home page of the official document website
  • Fixed the resource configuration description in the global configuration documentation
  • Added documents related to environment configuration in global configuration
  • Delete some configuration items of Flink configuration in the global configuration
  • Added document configuration description for alarm type email

v1.0.1

13 Mar 15:11
3949fb5

Dinky-1.0.1 Release Note

1.0.1 is a bug-fix release with no database changes; it can be upgraded directly.

About the Scala version: the release build uses Scala 2.12. If your environment must use Scala 2.11, compile it yourself; refer to Compile and Deploy and change the scala-2.12 profile to scala-2.11.

New Feature

  • Add some Flink Options classes to trigger quick completion prompts
  • Implement automatic scrolling of console logs during data development

Fix

  • Fixed the problem that the SMS alarm plug-in was not packaged
  • Fixed NPE exception and some other issues when creating UDF
  • Fixed job type rendering exception when creating tasks
  • Fixed the issue of page crash when viewing Catalog during data development
  • Fixed parameter configuration problem when using add jar with s3
  • Fix some issues with the rs protocol
  • Fixed the routing error jump problem in the quick navigation in data development
  • Fixed the issue that the console was not closed when selecting UDF task type
  • Fixed handling of decimal data types exceeding 38 digits (values with more than 38 digits are converted to string)
  • Fixed the problem that some pop-up boxes could not be closed
  • Fixed the problem that global variables cannot be recognized in application mode
  • Fixed the problem of array out-of-bounds when obtaining container in application mode
  • Fix the problem that add file cannot be parsed

Optimization

  • Optimize some front-end request URLs to use shared constants
  • Optimize the startup script and remove the FLINK_HOME environment variable loading
  • Optimize the prompt message when the password is incorrect
  • Optimize tag display of data development tasks
  • Turn off automatic preview in the data development editor
  • Optimize the expression variable definition method, changing from file definition to system configuration definition
  • Optimize the prompt message that query statements are not supported in application mode
  • Optimize the rendering effect of FlinkSQL environment list
  • Optimize the environment check exception prompt when building GIT projects
  • Optimize cluster heartbeat detection to avoid possible NPEs

Document

  • Added documentation for built-in variables in whole-database synchronization
  • Optimize documentation versioning
  • Add an EXECUTE JAR task demo
  • Optimize some wording in the cluster configuration creation prompts
  • Optimize some paths in the whole-database synchronization documentation

v1.0.0

01 Mar 16:02
9d06354

Dinky-1.0.0 Release Note

Upgrade Instructions

  • Dinky 1.0 is a refactored version that restructures existing functions, adds several enterprise-level functions, and fixes some limitations of 0.7. There is currently no direct upgrade from 0.7 to 1.0. It is recommended to redeploy version 1.0.
  • Starting from Dinky 1.0, the Dinky community will no longer maintain all versions before 1.0.
  • Starting from Dinky version 1.0, the Dinky community will provide support for Flink 1.14.x and above, and will no longer maintain Flink versions below 1.14. At the same time, Flink has added some new features, which Dinky will gradually support.
  • From Dinky 1.0 onward, each new Flink major version will be matched by a new Dinky major version, and a Dinky-Client version may be dropped at the same time, depending on the situation. Which version is dropped may be decided by a vote, and the vote results determine the removed version.
  • Four RC versions were released during the refactoring process. RC versions can be upgraded, but it is still recommended to redeploy the 1.0-RELEASE version to avoid lingering issues.
  • Users of Dinky version 0.7 can continue to use version 0.7, but no maintenance and support will be provided. It is recommended to install version 1.0 as soon as possible.

The changes from version 0.7 to version 1.0 are relatively large, and there are some incompatible changes. Users using version 0.7 cannot directly upgrade to version 1.0. It is recommended to redeploy version 1.0.

Incompatible changes

  • CDCSOURCE dynamic variable syntax changed from ${} to #{} (see the sketch after this list)
  • Global variables such as _CURRENT_DATE_ are removed and replaced by expression variables
  • Flink Jar task definition is changed from form to EXECUTE JAR syntax
  • The definition of dinky-app-xxxx.jar in Application mode is moved to the cluster configuration
  • The database DDL part is not compatible with upgrades
  • The type attribute of Dinky's built-in Catalog is changed from dlink_catalog to dinky_catalog
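
As an illustration of the ${} to #{} change above, a minimal CDCSOURCE sketch follows; the connector choice and option keys are illustrative placeholders rather than a complete or authoritative configuration:

```sql
-- Minimal sketch of the 1.0 dynamic-variable syntax; option keys and values
-- are illustrative placeholders, not a full CDCSOURCE configuration.
EXECUTE CDCSOURCE demo_sync WITH (
  'connector' = 'mysql-cdc',
  'hostname' = 'localhost',
  'port' = '3306',
  'username' = 'root',
  'password' = '******',
  'database-name' = 'app_db',
  'sink.connector' = 'doris',
  -- 0.7 wrote ${schemaName}/${tableName}; 1.0 uses #{...} expressions instead:
  'sink.table.identifier' = '#{schemaName}.#{tableName}'
);
```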

Refactoring

  • Refactored data development
  • Refactored the operation and maintenance center
  • Refactored the registration center
  • Refactored the Flink task submission process
  • Refactored the Flink Jar task submission method
  • Refactored the CDCSOURCE whole-database synchronization code architecture
  • Refactored Flink task monitoring and alerting
  • Refactored permission management
  • Refactored system configuration into online configuration
  • Refactored the push to DolphinScheduler
  • Refactored the packaging method

New Features

  • Data development supports code snippet prompts
  • Support real-time printing of Flink table data
  • Console real-time printing task submission log
  • Support Flink CDC 3.0 entire database synchronization
  • Support custom alarm rules and custom alarm templates
  • Support Flink k8s operator submission
  • Support proxy Flink webui access
  • Added custom charts for Flink task metrics monitoring
  • Support Dinky jvm monitoring
  • Added resource center functions (local, hdfs, oss) and expanded rs protocol
  • Added Git UDF/JAR project hosting and overall construction process
  • Supports Flink Jar task submission in all modes
  • Added ADD CUSTOMJAR syntax to dynamically load dependencies
  • Added ADD FILE syntax to dynamically load files
  • openapi supports custom parameter submission
  • Permission system upgrade to support tenants, roles, tokens, and menu permissions
  • Support LDAP
  • Added new widget function to the data development page
  • Support pushing dependent tasks to DolphinScheduler
  • Implement the Flink instance stopping function
  • Implement ordered data for CDCSOURCE whole-database synchronization under multiple degrees of parallelism
  • Implement a configurable alert re-send suppression feature
  • Implement ordinary SQL that can be scheduled and executed by DolphinScheduler
  • Added the ability to list the dependency JARs loaded by the system, grouped to make troubleshooting JAR-related issues easier
  • Implement a test-connection function for cluster configuration
  • Support H2, MySQL, and PostgreSQL as the metadata database; the default is H2

New syntax

  • CREATE TEMPORAL FUNCTION is used to define temporary table functions
  • ADD FILE is used to dynamically load class/configuration and other files
  • ADD CUSTOMJAR is used to dynamically load JAR dependencies
  • PRINT TABLE for real-time preview of table data
  • EXECUTE JAR is used to define Flink Jar tasks
  • EXECUTE PIPELINE is used to define Flink CDC 3.x whole-database synchronization tasks
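
Minimal sketches of a few of these statements; paths, identifiers, and WITH option keys are hypothetical placeholders rather than authoritative syntax references:

```sql
-- Illustrative only: paths, identifiers, and option keys are placeholders.
ADD CUSTOMJAR 'rs:/jar/my-udf.jar';       -- dynamically load a JAR dependency
ADD FILE '/opt/conf/my-job.properties';   -- dynamically load a class/config file
PRINT my_table;                           -- real-time preview of table data
EXECUTE JAR WITH (
  'uri' = 'rs:/jar/my-flink-job.jar',
  'main-class' = 'com.example.MyFlinkJob'
);
```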

Fix

  • Fixed the problem of missing extends path in CLASS_PATH of auto.sh
  • Fixed the problem that the job list life cycle status value was not re-rendered after release/offline
  • Fixed Flink 1.18 set syntax not working and producing null error
  • Fixed the savepoint strategy issue in submission history
  • Fixed the problem of creating views in Dinky Catalog
  • Fixed Flink application not throwing exception
  • Fixed incorrect rendering of alarm options
  • Fixed job life cycle issues
  • Fixed the problem that k8s YAML cannot be displayed in cluster configuration
  • Fixed the duration formatting error in the operation and maintenance center job list
  • Fixed the Flink DAG tooltip issue
  • Fixed checkpoint path not found
  • Fixed node location error when pushing jobs to Dolphin Scheduler
  • Fixed the problem that job parameters did not take effect when the set configuration contained single quotes
  • Upgrade jmx_prometheus_javaagent to 0.20.0 to resolve some CVEs
  • Fixed checkpoint display problem
  • Fixed job instances always showing as running
  • Fixed the problem of missing log printing after Yarn Application failed to submit a task
  • Fixed the problem that job configuration cannot render yarn prejob cluster
  • Fixed URL misspelling causing request failure
  • Fixed the problem of inserting the same token value when multiple users log in
  • Fixed alarm instance form rendering issue
  • Fixed the problem that FlinkSQLEnv could not be checked
  • Fixed the problem that set statement could not take effect
  • Fixed custom Flink and Hadoop configuration not taking effect in Yarn cluster configuration
  • Fixed the problem that the checkpoint information of the operation and maintenance center cannot be obtained
  • Fixed the problem that the status cannot be detected after the Yarn Application job is completed
  • Fixed the problem of no printing in the console log when yarn job submission failed
  • Fixed the issue where Flink instances started from cluster configuration cannot be selected in job configuration
  • Fixed RECONNECT status job status recognition error
  • Fixed an issue with FlinkJar tasks being submitted to PreJob mode
  • Fixed Dinky startup detection pid problem
  • Fixed conflicts caused by the built-in Paimon version being inconsistent with the user-integrated version (resolved by shading)
  • Fixed the problem that the CheckPoint parameter of the FlinkJar task does not take effect in Application mode
  • Fixed the issue where the name and remark information were updated incorrectly when modifying the Task job
  • Fixed the issue where password is required when registering data source
  • Fixed the problem of incorrect heartbeat detection of cluster instances
  • Fixed the problem that Jar task submission cannot use set syntax
  • Fixed an issue where data development->job list cannot be folded in some cases
  • Fixed the problem of repeated sending of alarm information under multi-threading
  • Fixed the tab height issue for jobs opened in data development
  • Fixed the problem that the jobmanager log of the operation and maintenance center job details could not be displayed normally in some cases
  • Fixed Catalog NPE issues
  • Fixed the problem of incorrect prejob task status
  • Fixed add customjar syntax problem
  • Fixed the problem that the jar task could not be monitored
  • Fixed Token invalid exception
  • Fixed a series of problems caused by statement delimiters and removed the system configuration
  • Fixed the problem of task status rendering in the operation and maintenance center
  • Fixed the problem of failure to delete tasks when the job instance does not exist
  • Fixed duplicate exception alarm
  • Fixed some PyFlink submission issues
  • Fixed the problem that Application Mode cannot use global variables
  • Fixed the problem that K8s task could not start due to uninitialized resource type
  • Fixed the pipeline retrieval error for Jar tasks that broke the front end
  • Fix SqlServer timestamp-to-string conversion
  • Fixed NPE issue when publishing tasks with UDF
  • Fixed the problem of Jar task being unable to obtain execution history
  • Fixed the problem of front-end crash caused by NPE when Doris data source obtains DDL and queries

Optimization

  • Added key width for job configuration items
  • Optimize query job directory tree
  • Optimize Flink on yarn app submission
  • Optimize Explainer class to use builder pattern to build results
  • Optimize document management
  • Implement operator via SPI
  • Optimize document form pop-up layer
  • Optimize type rendering of Flink instances
  • Optimize the data source details search box
  • The version number is now returned by the backend interface
  • Optimize the CANCEL job logic; lost-connection jobs can now be forcefully stopped
  • Optimize the reference check when registration center entries are deleted
  • Optimize job creation to allow specifying a job template
  • Optimize Task deletion logic
  • Optimize some front-end internationalization
  • Optimize automatic switching between console and result tag during execution preview
  • Optimize the UDF download logic of K8S
  • Optimize whole-database synchronization for sharded databases and tables
  • Optimize the registration center->d...

Dinky v1.0.0-rc4

28 Jan 16:31
c3ea384

Feature:

  • Implement whole-database synchronization with ordered data under multiple degrees of parallelism
  • Implement HDFS HA in the resource center
  • Implement permission control for global configuration in the configuration center
  • Implement a configurable alert re-send suppression feature
  • Implement DB SQL that can be scheduled by DolphinScheduler
  • Added resource center directory synchronization based on the configured resource storage type (currently implemented for OSS)

Fix:

  • Fix the issue of incorrect heartbeat detection in cluster instances
  • Fix delimiter issues
  • Fix the issue of Jar task submission not being able to use set syntax
  • Fix the issue of NPE when obtaining user related information through LDAP
  • Fix the parent ID not being carried when assigning menu permissions
  • Fix the issue of version history not being updated properly when switching keys in data development
  • Fix some default value issues in PG SQL files
  • Fix Dinky's inability to start due to resource configuration errors
  • Fix the issue of default route redirection in permission control
  • Fix the issue of the data development -> task list section not being able to fold
  • Fix the issue of duplicate sending of alarm information under multithreading
  • Fix the tab height issue for jobs opened in data development
  • Fix authentication-related issues when integrating GitLab
  • Fix the issue where jobmanager logs in the operation and maintenance center's job details cannot be displayed properly
  • Fix Catalog NPE issues
  • Fix the issue of Yarn's port being 0
  • Fix front-end form status issues with data sources
  • Fix the issue with Kubeconfig acquisition
  • Fix the issue of pre job task status errors
  • Fix syntax issues with add customjar
  • Fix some web NPE exceptions
  • Fix a bug in enabling SSL when the alarm instance is of email type
  • Fix the issue of inability to monitor jar tasks

Optimization & Improve:

  • Optimize the UDF download logic of K8S
  • Optimize CDC3.0 related logic
  • Optimize whole-database synchronization for sharded databases and tables
  • Optimize and integrate LDAP logic
  • Optimize the logic of jumping from the registration center data source list to the details page
  • Optimize job configuration logic (job configuration cannot be edited while in the published state)
  • Optimize the cluster instance rendering logic for job configuration in data development
  • Optimize the startup script to enable configuration of environment variables for startup

Document:

  • Optimize the documentation for sharded databases and tables
  • Optimize the standard deployment documentation
  • Add documentation on alert re-send suppression
  • Optimize OpenAPI documentation
  • Add HDFS HA configuration document

Contributors

@aiwenmo
@gaoyan1998
@izouxv
@JiaLiangC
@kylinmac
@leechor
@yangzehan
@yqwoe
@zackyoungh

Dinky v1.0.0-rc3

08 Jan 15:37
df094b9

New Feature

  • The default Flink version is changed to 1.16
  • Implement the CodeShow component line break button
  • Implement the Flink instance stopping function
  • Implement deletion of defined task monitoring layouts

Optimization

  • The version number is now returned by the backend interface
  • Optimize the CANCEL job logic; lost-connection jobs can now be forcefully stopped
  • Optimize the reference check when registration center entries are deleted
  • Optimize job creation to allow specifying a job template
  • Optimize Task deletion logic
  • Optimize some front-end internationalization
  • Optimize Dinky process PID detection logic
  • Optimize automatic switching between console and result tag during execution preview

Fix

  • Fixed alarm instance form rendering issue
  • Fixed the problem that FlinkSQLEnv could not be checked
  • Fixed the problem that set statement could not take effect
  • Fixed custom Flink and Hadoop configuration not taking effect in Yarn cluster configuration
  • Fix some problems in Prejob mode
  • Fixed the problem that the checkpoint information of the operation and maintenance center cannot be obtained
  • Fixed the problem that the status cannot be detected after the Yarn Application job is completed
  • Fixed the problem that the console log failed to print when yarn job submission failed.
  • Fix the 404 error when getting the savepoint list
  • Fixed the issue where Flink instances started from the cluster configuration could not be selected in the job configuration.
  • Fixed RECONNECT status job status recognition error
  • Fixed the problem that the end time of the operation and maintenance center list is 1970-01-01
  • Fixed the problem of submitting FlinkJar tasks to PreJob mode
  • Fixed the repeated introduction of dependencies in the alarm module, causing conflicts
  • Fix the Dinky startup PID detection issue
  • Fix conflicts caused by the built-in Paimon version being inconsistent with the user-integrated version (resolved by shading)
  • Fix the EXECUTE JAR syntax regex issue
  • Fixed the problem that the CheckPoint parameter does not take effect in the Application mode of the FlinkJar task
  • Fixed the issue where the name and remark information were updated incorrectly when modifying Task operations
  • Fixed the problem that password is required when registering data source

Document

  • Add some data development related documents
  • Optimize some documents of the registration center
  • Remove some deprecated/wrong documentation
  • Adjust some document structures
  • Add quick start document
  • Add deployment documents

@aiwenmo
@drgnchan
@gaoyan1998
@gitfortian
@gitjxm
@leechor
@leeoo
@Logout-y
@MaoMiMao
@Pandas886
@yangzehan
@YardStrong
@zackyoungh
@Zzm0809

Dinky v1.0.0-rc2

01 Jan 15:14
6df7791

Fix:
[Fix-2739] Fix bug that complete the missing path in auto.sh's CLASS_PATH
[Fix-2740] Fixed issue of re-rendering task list after publishing or offline
[Fix] Fix flink 1.18 set operator not work and configure null error
[Fix] Fix the bug of save_point_strategy in submission history
[Fix] Fix the bug of print flink table
[Fix] Fix the bug of create view to ddl catalog
[Fix] Fix flink application not throw exception
[Fix] Fix the alert option is incorrect
[Fix] Fix the bug of job life cycle
[Fix-2754] Fix the YAML of K8s form in the cluster is not displayed
[Fix-2756] Fix the devops job list duration format error
[Fix-2777] Fix flink dag tooltip
[Fix-2782]Fix checkpoint path not found
[Fix] Fix the locations bug in pushing task to DolphinScheduler
[Fix-2806] The job parameters are not effective when the set parameters key and value contain single quotes
[Fix-2811] Upgrade jmx_prometheus_javaagent to 0.20.0 to fix some CVE
[Fix-2814] Fix checkpoint overview error
[Fix] Fix Flink catalog does not take effect with add_jar
[Fix] Fix some devops bug
[Fix-2832] Fix h2 driver no default packaging problem
[Fix] Fix sql bug
[Fix] Fixed jobInstance was always in the running state
[Fix-2843] Fix Yarn Application mode submission task failed and lack of log printing
[Fix] Fix the bug of udf in h2
[Fix-2823] Fix jobconfig cannot render yarn prejob cluster
[Fix] Fix URL misspelling causing the request to fail
[Fix-2855] Fix savepoint table params bug
[Fix-2776] Fix multi user login with the same token value insert error

Optimization & Improve:
[Improve] Improve extract yaml from execute pipeline command
[Optimization] Add key width for job configure item
[Optimization] Add dinky port configure in PrintNetSink
[Improve] Improve query catalog tree
[Optimization-2773] Optimize the data source directory tree has two scroll bars
[Optimization-2822] Optimize metrics page tips
[Optimization] Optimize Flink on yarn app submit
[Optimization] Optimize Explainer class to use builder pattern for results
[Optimization] Optimize document management
[Optimization] Implement operator with SPI
[Improve] Improve document form layout
[Optimization-2757] Optimize Flink instance render type
[Optimization-2755] Optimize datasource detail search box
[Optimization] Add resource implement for DinkyClassLoader

Document:
[Document] Improve the cluster instance list document for the registration center
[Document] Improve the alert document for the registration center
[Document] Improve the git project document for the registration center
[Document] Improve the k8s document for the quick start
[Document] Modify domain name
[Document] Improve documents in registration center and authentication center
[Document] Improve documents in developer guide
[Document] Add parameter description in CDCSOURCE and example for debezium.*
[Document-2830] Update download
[Document] Modify document struct

Contributors:
@aiwenmo
@gaoyan1998
@gitfortian
@leeoo
@leechor
@stdnt-xiao
@yangzehan
@zackyoungh
@Zzm0809

Dinky v1.0.0-rc1

24 Dec 16:57
5be6960
Pre-release

Introduction

Dinky is a data development platform based on Apache Flink, which enables agile data development and deployment.

Upgrade instructions

Dinky 1.0 is a refactored version that restructures existing functions, adds several enterprise-level functions, and fixes some limitations of 0.7. Currently, it is not possible to directly upgrade from 0.7 to 1.0. An upgrade plan will be provided in the future.

Function

Its main functions are as follows:

  • FlinkSQL data development: automatic prompt completion, syntax highlighting, statement beautification, syntax verification, execution plan, MetaStore, lineage analysis, version comparison, etc.
  • Support FlinkSQL multi-version development and multiple execution modes: Local, Standalone, Yarn/Kubernetes Session, Yarn Per-Job, Yarn/Kubernetes Application
  • Support Apache Flink ecosystem: Connector, FlinkCDC, Paimon, etc.
  • Support FlinkSQL syntax enhancement: whole database synchronization, execution environment, global variables, statement merging, table value aggregation function, loading dependencies, row-level permissions, Jar submission, etc.
  • Support FlinkCDC real-time whole-database ingestion into warehouses and lakes: output to multiple databases, automatic table creation, schema evolution, sharded databases and tables
  • Supports SQL job development and metadata browsing: ClickHouse, Doris, Hive, Mysql, Oracle, Phoenix, PostgreSql, Presto, SqlServer, StarRocks, etc.
  • Support Flink real-time online debugging preview TableData, ChangeLog, Operator, Catalog
  • Support Flink job custom monitoring statistical analysis and custom alarm rules.
  • Support real-time task operation and maintenance: online and offline, job information (supports obtaining checkpoint), job log, version information, job snapshot, monitoring, SQL lineage, alarm record, etc.
  • Support real-time job alarms and alarm groups: DingTalk, enterprise WeChat (WeCom), Feishu, email, SMS, etc.
  • Supports automatically hosted SavePoint/CheckPoint recovery and triggering mechanisms: latest, earliest, specified, etc.
  • Supports multiple resource management: cluster instances, cluster configurations, data sources, alarms, documents, global variables, Git projects, UDFs, system configurations, etc.
  • Support enterprise-level management: tenants, users, roles, menus, tokens, data permissions

New Feature

  • Added a new home page dashboard
  • Data development supports code tips
  • Supports real-time printing of Flink table data
  • The console supports real-time printing task submission log
  • Support Flink CDC 3.0 entire database synchronization
  • Support customized alarm rules and customized alarm information templates
  • Comprehensive revision of the operation and maintenance center
  • k8s and k8s operator support
  • Support proxy Flink webui access
  • Support Flink task monitoring
  • Support Dinky jvm monitoring
  • New resource center function and expanded rs protocol
  • New Git UDF/JAR project hosting and overall construction process
  • Supports custom Jar task submission in all modes
  • openapi supports custom parameter submission
  • Permission system upgrade, supporting tenants, roles, tokens, and menu permissions
  • LDAP authentication support
  • New widget function on data development page
  • Support pushing dependent tasks to DolphinScheduler