SQL Father: quickly generate SQL and mock data, greatly improving development and testing efficiency!
A full-stack (front-end + back-end) project by programmer Yupi.
This project took a lot of effort to build; please do not use it for commercial purposes or resale!
Online experience: http://sqlfather.yupi.icu
Video demonstration (usage tutorial): https://www.bilibili.com/video/BV1eP411N7B7/
If you find this project helpful, following the author and giving the video a like, coin, and favorite (the Bilibili "triple combo") is the greatest support. Thank you!
Front-end code repository: https://github.com/liyupi/sql-father-frontend-public
Backend code repository: https://github.com/liyupi/sql-father-backend-public
In my programming knowledge planet, I give a detailed walkthrough of this project: the idea behind it, the technology selection, the system design, the source code, and how to describe it on a resume. If you want to put this project on your resume or study it in depth, you are welcome to join my planet.
The project started because Yupi wanted to stop repeatedly writing SQL to create tables and insert test data when developing projects; it was then open-sourced so that everyone can learn and improve it together~
Just imagine: when starting a new project, instead of writing SQL to create tables and insert data, you could directly get a table already filled with fake data. How great would that be?
Some students asked why the project is not called SQL Mother. Then let me ask you: why is the class you inherit from called the parent class?
Whether you are a front-end, back-end, test, or data development engineer, a data scientist, or a research student, I believe this tool will be helpful to you!
The main application scenarios are as follows:
1) Fill in a visual form to quickly generate create-table statements, mock data, and code, and say goodbye to repetitive work!
2) Multiple quick import methods are supported. For example, if you already have an existing data table, you can import its create-table statement and generate mock data with one click; you can also import an Excel sheet to quickly create a table; there is even intelligent import, where entering just a few words automatically generates the table and its data!
3) Multiple rules for generating mock data are supported, such as fixed values, random values, regular expressions, and incrementing values; you can even pick a word library to generate random values within a specific range!
4) Word libraries, table designs, and field information can be shared. You can learn from or reference other students' table designs, or directly reuse existing tables and fields for one-click generation or further development. Long live collaboration!
5) You can use the ready-made word libraries to build dictionaries or use them as datasets for research, and improvements to the word libraries are also welcome!
The project itself is fully functional (split into a user front end and an admin back end), meets production standards, and has a clear architecture and a standardized directory structure.
The front end uses complex nested, dynamic, collapsible forms and a code editor; the back end uses a variety of mainstream design patterns, AOP-based permission checks, and more, all of which are well worth studying.
Suggestions and advice from experienced developers are also very welcome!
Front-end main technologies:
Front-end dependent libraries:
Back-end main technologies:
Back-end dependent libraries:
Install dependencies:
npm install
Run:
npm run dev
Build (package):
npm run build
This section mainly covers the overall architecture and core design of the system; the conventional web development parts are not discussed in much detail.
Core design concept: unify every input method into a clear Schema, and generate all kinds of output based on that Schema.
The architecture design diagram is as follows, i.e. any input => unified Schema => any output:
The system is divided into the following core modules, each with clear responsibilities:
The code of the core modules is in the backend core directory.
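To make the pipeline concrete, here is a tiny conceptual sketch of the "any input => unified Schema => any output" idea. The interface names below are hypothetical and only illustrate the shape of the design; TableSchema is the unified schema object described in the next section.

// Conceptual sketch only: hypothetical interfaces, not the project's actual API.

// Every input method (form, SQL, Excel, smart import, ...) converges into one schema.
public interface SchemaBuilder<I> {
    TableSchema build(I input);
}

// Every output type (create-table SQL, mock data, Java/front-end code, ...) is produced from that schema.
public interface SchemaConsumer<O> {
    O generate(TableSchema tableSchema);
}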
Core class: TableSchemaBuilder, whose job is to converge the different kinds of input parameters into a TableSchema object.
It contains the following methods:
Among them, buildFromSql (building a Schema from SQL) uses the SQL parser bundled with the Druid database connection pool, which is very powerful. (In general, you should not write things like parsers yourself; in the same amount of time you could finish several projects, and a hand-written parser will not be as good as an existing one.)
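For reference, here is a rough sketch of how a create-table statement can be parsed into schema information with Druid's SQL parser. This is illustrative only, not the project's actual TableSchemaBuilder code, and it assumes Druid 1.2.x and the simplest possible input:

import com.alibaba.druid.DbType;
import com.alibaba.druid.sql.SQLUtils;
import com.alibaba.druid.sql.ast.SQLStatement;
import com.alibaba.druid.sql.ast.statement.SQLColumnDefinition;
import com.alibaba.druid.sql.ast.statement.SQLCreateTableStatement;
import com.alibaba.druid.sql.ast.statement.SQLTableElement;
import java.util.List;

public class SqlToSchemaDemo {
    public static void main(String[] args) {
        String sql = "create table test_table (username varchar(256) not null comment '用户名')";

        // Let Druid build the AST instead of hand-rolling a parser.
        List<SQLStatement> statements = SQLUtils.parseStatements(sql, DbType.mysql);
        SQLCreateTableStatement create = (SQLCreateTableStatement) statements.get(0);
        System.out.println("table = " + create.getTableSource());

        // Walk the column definitions and map them onto schema fields.
        for (SQLTableElement element : create.getTableElementList()) {
            if (element instanceof SQLColumnDefinition) {
                SQLColumnDefinition column = (SQLColumnDefinition) element;
                System.out.println("field = " + column.getNameAsString()
                        + ", type = " + column.getDataType()
                        + ", comment = " + column.getComment());
            }
        }
    }
}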
It is used to store table and field information; the structure is as follows:
{
  "dbName": "库名",
  "tableName": "test_table",
  "tableComment": "表注释",
  "mockNum": 20,
  "fieldList": [{
    "fieldName": "username",
    "comment": "用户名",
    "fieldType": "varchar(256)",
    "mockType": "随机",
    "mockParams": "人名",
    "notNull": true,
    "primaryKey": false,
    "autoIncrement": false
  }]
}
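As an illustration, the JSON above maps naturally onto a plain Java object. The class below is a hedged sketch that mirrors the JSON field names (it assumes Lombok for getters/setters and is not necessarily identical to the class in the backend core directory):

import lombok.Data;
import java.util.List;

@Data
public class TableSchema {
    private String dbName;          // database name
    private String tableName;       // table name
    private String tableComment;    // table comment
    private Integer mockNum;        // number of mock rows to generate
    private List<Field> fieldList;  // column definitions

    @Data
    public static class Field {
        private String fieldName;   // column name
        private String comment;     // column comment
        private String fieldType;   // column type, e.g. varchar(256)
        private String mockType;    // mock rule type, e.g. random
        private String mockParams;  // mock rule parameter, e.g. person name
        private Boolean notNull;
        private Boolean primaryKey;
        private Boolean autoIncrement;
    }
}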
Each build target is defined as a Builder (in the core/builder directory):
Among them, the SQL generator (SqlBuilder) uses dialects to support different database types (strategy pattern), and dialect instances are created via the singleton pattern + factory pattern.
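A minimal sketch of the strategy + factory idea for dialects is shown below. The class and method names are hypothetical and only illustrate the pattern; the project's own dialect classes live in the backend core directory:

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

public class SqlDialectDemo {

    // Strategy: each database type implements the same dialect interface.
    public interface SQLDialect {
        String wrapFieldName(String name);   // e.g. add back-quotes for MySQL
    }

    public static class MySQLDialect implements SQLDialect {
        @Override
        public String wrapFieldName(String name) {
            return "`" + name + "`";
        }
    }

    // Factory + "one instance per dialect": created once, then cached and reused.
    public static class SQLDialectFactory {
        private static final Map<String, SQLDialect> DIALECTS = new ConcurrentHashMap<>();

        public static SQLDialect getDialect(String className, Supplier<SQLDialect> creator) {
            return DIALECTS.computeIfAbsent(className, key -> creator.get());
        }
    }

    public static void main(String[] args) {
        SQLDialect dialect = SQLDialectFactory.getDialect(
                MySQLDialect.class.getName(), MySQLDialect::new);
        System.out.println(dialect.wrapFieldName("username")); // `username`
    }
}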
The Java and front-end code generators (JavaCodeBuilder, FrontendCodeBuilder) generate code using the FreeMarker template engine.
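The project's real templates are separate template files; as a hedged illustration of how FreeMarker is typically driven from Java (assuming FreeMarker 2.3.31+, with a made-up inline template rather than the project's actual one):

import freemarker.template.Configuration;
import freemarker.template.Template;
import java.io.StringReader;
import java.io.StringWriter;
import java.util.HashMap;
import java.util.Map;

public class FreeMarkerDemo {
    public static void main(String[] args) throws Exception {
        Configuration cfg = new Configuration(Configuration.VERSION_2_3_31);

        // A tiny inline template: generate a Java field declaration from schema data.
        String tpl = "private ${fieldType} ${fieldName}; // ${comment}";
        Template template = new Template("fieldTemplate", new StringReader(tpl), cfg);

        Map<String, Object> dataModel = new HashMap<>();
        dataModel.put("fieldType", "String");
        dataModel.put("fieldName", "username");
        dataModel.put("comment", "user name");

        StringWriter out = new StringWriter();
        template.process(dataModel, out);
        System.out.println(out); // private String username; // user name
    }
}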
Each generation rule is defined as a Generator, and DataGeneratorFactory (factory pattern) creates and manages the Generator instances in one place.
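A minimal sketch of that idea, with hypothetical names that only mirror the description above (TableSchema.Field refers to the schema object sketched earlier; the real classes are in the backend core directory):

import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class DataGeneratorDemo {

    // One Generator per mock rule (fixed value, random, regex, increasing, ...).
    public interface DataGenerator {
        List<String> doGenerate(TableSchema.Field field, int rowNum);
    }

    // Factory: register once, then look Generators up by mock type.
    public static class DataGeneratorFactory {
        private static final Map<String, DataGenerator> GENERATOR_MAP = new HashMap<>();

        public static void register(String mockType, DataGenerator generator) {
            GENERATOR_MAP.put(mockType, generator);
        }

        public static DataGenerator getGenerator(String mockType) {
            return GENERATOR_MAP.get(mockType);
        }
    }
}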
The datafaker library is used to implement random data generation (RandomDataGenerator).
The Generex library is used to implement regular-expression-based data generation (RuleDataGenerator).
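For a quick feel for the two libraries, here is a small, hedged usage sketch (illustrative calls only, not the project's actual generator classes):

import com.mifmif.common.regex.Generex;
import net.datafaker.Faker;

public class MockLibraryDemo {
    public static void main(String[] args) {
        // datafaker: realistic-looking random values such as names and emails.
        Faker faker = new Faker();
        System.out.println(faker.name().fullName());
        System.out.println(faker.internet().emailAddress());

        // Generex: random strings that match a given regular expression.
        Generex generex = new Generex("1[3-9][0-9]{9}");
        System.out.println(generex.random()); // e.g. an 11-digit phone-number-like string
    }
}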
The facade pattern is used to aggregate the various generation types and provide a unified entry point for generation and validation.
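A hedged sketch of what such a facade can look like (hypothetical names; the commented-out lines stand in for calls to the builders and generators described above):

import java.util.List;

public class GeneratorFacadeDemo {

    // Aggregated result of one generation run (illustrative fields only).
    public static class GenerateVO {
        public String createSql;        // create-table statement
        public List<String> insertSql;  // insert statements filled with mock data
        public String javaEntityCode;   // generated Java entity code
    }

    // Single entry point: validate once, then fan out to the individual builders.
    public static GenerateVO generateAll(TableSchema tableSchema) {
        validateSchema(tableSchema);
        GenerateVO result = new GenerateVO();
        // result.createSql = ...SqlBuilder builds the create-table SQL...
        // result.insertSql = ...rows via DataGeneratorFactory, then insert statements...
        // result.javaEntityCode = ...JavaCodeBuilder + FreeMarker template...
        return result;
    }

    // Unified validation shared by all generation entry points.
    private static void validateSchema(TableSchema tableSchema) {
        if (tableSchema == null || tableSchema.getFieldList() == null
                || tableSchema.getFieldList().isEmpty()) {
            throw new IllegalArgumentException("fieldList must not be empty");
        }
    }
}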
The sharing features (word libraries, table information, and field information) are essentially standard create/read/update/delete web services for these entities, so they are not covered in detail here.
If you want a complete, detailed explanation of this project, you are welcome to join Yupi's programming knowledge planet, Yupi's programming learning circle. There I walk everyone through analyzing and building this project from 0 to 1 and answer questions one-on-one. You will not only be able to build one yourself, but also learn how to present this project on your resume. Project experience +1.
Contributions from everyone are welcome; please read the following carefully first:
Partial word library source: https://github.com/fighting41love/funNLP
Sample table information source: https://open.yesapi.cn/list1.html