Introduction

In the previous two articles, we explored the generation of SQLToken and of real SQL. This article continues with the generation of LogicSQL, starting from the beginning to complete the puzzle.

The source code to explore

This continues the exploration from the previous two articles:

  • Analysis and generation of ShardingSphere statements
  • ShardingSphere SQLToken generation exploration

Two elements are needed to generate real SQL:

  • The mapping from logical table name to real table name: this is generated in SQLToken
  • The index positions used when concatenating the SQL statement: these are generated in LogicSQL
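As a rough, self-contained sketch of how these two elements combine (illustrative names only, not ShardingSphere's actual API):

```java
import java.util.Map;

// Hypothetical sketch of combining the two elements: a token carrying the
// logical table's start/stop indexes (from parsing) plus the logical-to-real
// table mapping (from SQLToken generation). Names are illustrative.
public class TableTokenRewriteSketch {

    public record TableToken(int startIndex, int stopIndex, String logicTable) {}

    public static String rewrite(String logicSql, TableToken token, Map<String, String> tableMapping) {
        String actualTable = tableMapping.getOrDefault(token.logicTable(), token.logicTable());
        // Splice the real table name in at the recorded index positions.
        return logicSql.substring(0, token.startIndex())
                + actualTable
                + logicSql.substring(token.stopIndex() + 1);
    }

    public static void main(String[] args) {
        String sql = "INSERT INTO t_order (user_id, address_id, status) VALUES (?, ?, ?)";
        TableToken token = new TableToken(12, 18, "t_order"); // indexes of "t_order"
        System.out.println(rewrite(sql, token, Map.of("t_order", "t_order_0")));
        // prints: INSERT INTO t_order_0 (user_id, address_id, status) VALUES (?, ?, ?)
    }
}
```

This is why both the index positions and the table mapping are required: without the indexes there is nowhere to splice, and without the mapping there is nothing to splice in.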

The remaining piece is the index generation, so let’s look at the LogicSQL code:

Looking for an entry point

Here is the LogicSQL generation section:

    private ExecutionContext createExecutionContext() {
        LogicSQL logicSQL = createLogicSQL();
        SQLCheckEngine.check(logicSQL.getSqlStatementContext().getSqlStatement(), logicSQL.getParameters(), 
                metaDataContexts.getMetaData(connection.getSchemaName()).getRuleMetaData().getRules(), connection.getSchemaName(), metaDataContexts.getMetaDataMap(), null);
        ExecutionContext result = kernelProcessor.generateExecutionContext(logicSQL, metaDataContexts.getMetaData(connection.getSchemaName()), metaDataContexts.getProps());
        findGeneratedKey(result).ifPresent(generatedKey -> generatedValues.addAll(generatedKey.getGeneratedValues()));
        return result;
    }
    
    private LogicSQL createLogicSQL() {
        List<Object> parameters = new ArrayList<>(getParameters());
        SQLStatementContext<?> sqlStatementContext = SQLStatementContextFactory.newInstance(metaDataContexts.getMetaDataMap(), parameters, sqlStatement, connection.getSchemaName());
        return new LogicSQL(sqlStatementContext, sql, parameters);
    }
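As the constructor call above shows, LogicSQL is essentially a value object bundling three things: the parsed statement context, the original SQL text, and the parameter list. A minimal analogue (simplified types, illustrative class name):

```java
import java.util.List;

// Minimal analogue of LogicSQL: it simply bundles the parsed statement
// context, the original SQL string, and the parameters, mirroring the
// new LogicSQL(sqlStatementContext, sql, parameters) call above.
// Types are simplified for illustration.
public class LogicSqlSketch {

    public record LogicSql(Object sqlStatementContext, String sql, List<Object> parameters) {}

    public static void main(String[] args) {
        LogicSql logicSql = new LogicSql(new Object(),
                "INSERT INTO t_order (user_id, address_id, status) VALUES (?, ?, ?)",
                List.of(1, 2L, "INIT"));
        System.out.println(logicSql.sql());
    }
}
```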

Using debug, however, we can see that the related items are generated earlier than this:

Tracing back, the parsing actually happens in the ShardingSpherePreparedStatement constructor:

# ShardingSpherePreparedStatement.java
    private ShardingSpherePreparedStatement(final ShardingSphereConnection connection, final String sql,
                                            final int resultSetType, final int resultSetConcurrency, final int resultSetHoldability, final boolean returnGeneratedKeys) throws SQLException {
        if (Strings.isNullOrEmpty(sql)) {
            throw new SQLException(SQLExceptionConstant.SQL_STRING_NULL_OR_EMPTY);
        }
        this.connection = connection;
        metaDataContexts = connection.getContextManager().getMetaDataContexts();
        this.sql = sql;
        statements = new ArrayList<>();
        parameterSets = new ArrayList<>();
        ShardingSphereSQLParserEngine sqlParserEngine = new ShardingSphereSQLParserEngine(
                DatabaseTypeRegistry.getTrunkDatabaseTypeName(metaDataContexts.getMetaData(connection.getSchemaName()).getResource().getDatabaseType())); // it is generated here
        sqlStatement = sqlParserEngine.parse(sql, true);
        parameterMetaData = new ShardingSphereParameterMetaData(sqlStatement);
        statementOption = returnGeneratedKeys ? new StatementOption(true) : new StatementOption(resultSetType, resultSetConcurrency, resultSetHoldability);
        JDBCExecutor jdbcExecutor = new JDBCExecutor(metaDataContexts.getExecutorEngine(), connection.isHoldTransaction());
        driverJDBCExecutor = new DriverJDBCExecutor(connection.getSchemaName(), metaDataContexts, jdbcExecutor);
        rawExecutor = new RawExecutor(metaDataContexts.getExecutorEngine(), connection.isHoldTransaction(), metaDataContexts.getProps());
        // TODO Consider FederateRawExecutor
        federateExecutor = new FederateJDBCExecutor(connection.getSchemaName(), metaDataContexts.getOptimizeContextFactory(), metaDataContexts.getProps(), jdbcExecutor);
        batchPreparedStatementExecutor = new BatchPreparedStatementExecutor(metaDataContexts, jdbcExecutor, connection.getSchemaName());
        kernelProcessor = new KernelProcessor();
    }

Going back a bit further, we can see that the trigger is in OrderRepositoryImpl.java:

# OrderRepositoryImpl.java
    @Override
    public Long insert(final Order order) throws SQLException {
        String sql = "INSERT INTO t_order (user_id, address_id, status) VALUES (?, ?, ?)";
        try (Connection connection = dataSource.getConnection();
             PreparedStatement preparedStatement = connection.prepareStatement(sql, Statement.RETURN_GENERATED_KEYS)) {
            preparedStatement.setInt(1, order.getUserId());
            preparedStatement.setLong(2, order.getAddressId());
            preparedStatement.setString(3, order.getStatus());
            preparedStatement.executeUpdate();
            try (ResultSet resultSet = preparedStatement.getGeneratedKeys()) {
                if (resultSet.next()) {
                    order.setOrderId(resultSet.getLong(1));
                }
            }
        }
        return order.getOrderId();
    }

The parsing entry point is the line sqlStatement = sqlParserEngine.parse(sql, true);

Stepping into it repeatedly, we arrive at a SQL-processing class: MySQLStatementParser.java

At first glance it is quite complex, so we follow the debugger into the branch that handles insert:

			case XA:
				enterOuterAlt(_localctx, 1);
				{
				setState(1246);
				_errHandler.sync(this);
				switch ( getInterpreter().adaptivePredict(_input,0,_ctx) ) {
				case 1:
					{
					setState(1146);
					select();
					}
					break;
				case 2:
					{
					setState(1147);
					insert();
					}
					break;

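The adaptivePredict call above decides which branch (select, insert, ...) to take by looking ahead at the input tokens. A greatly simplified sketch of that dispatch-by-lookahead idea (not ANTLR's actual mechanism):

```java
// Greatly simplified sketch of dispatch-by-lookahead: pick the parse rule
// to enter based on the first keyword. ANTLR's adaptivePredict is far more
// sophisticated, but the branching effect is similar. Illustrative only.
public class StatementDispatchSketch {

    public static String dispatch(String sql) {
        String firstKeyword = sql.trim().split("\\s+")[0].toUpperCase();
        return switch (firstKeyword) {
            case "SELECT" -> "select()"; // case 1 in the generated code
            case "INSERT" -> "insert()"; // case 2 in the generated code
            default -> "unsupported";
        };
    }

    public static void main(String[] args) {
        System.out.println(dispatch("INSERT INTO t_order (user_id) VALUES (?)")); // prints: insert()
    }
}
```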

Next, the function that specifically handles the insert statement:

# MySQLStatementParser.java
	public final InsertContext insert() throws RecognitionException {
		InsertContext _localctx = new InsertContext(_ctx, getState());
		enterRule(_localctx, 2, RULE_insert);
		int _la;
		try {
			enterOuterAlt(_localctx, 1);
			{
			setState(1258);
			// Insert related processing
			match(INSERT);
			setState(1259);
			insertSpecification();
			setState(1261);
			_errHandler.sync(this);
			_la = _input.LA(1);
			if (_la==INTO) {
				{
				setState(1260);
				// into related processing
				match(INTO);
				}
			}

			setState(1263);
			// Table name related processing
			tableName();
			setState(1265);
			_errHandler.sync(this);
			_la = _input.LA(1);
			if (_la==PARTITION) {
				{
				setState(1264);
				partitionNames();
				}
			}

			setState(1270);
			_errHandler.sync(this);
			switch ( getInterpreter().adaptivePredict(_input,6,_ctx) ) {
			case 1:
				{
				setState(1267);
				// values Related processing
				insertValuesClause();
				}
				break;
			case 2:
				{
				setState(1268);
				setAssignmentsClause();
				}
				break;
			case 3:
				{
				setState(1269);
				insertSelectClause();
				}
				break;
			}
			setState(1273);
			_errHandler.sync(this);
			_la = _input.LA(1);
			if (_la==ON) {
				{
				setState(1272);
				onDuplicateKeyClause();
				}
			}
			}
		}
		catch (RecognitionException re) {
			_localctx.exception = re;
			_errHandler.reportError(this, re);
			_errHandler.recover(this, re);
		}
		finally {
			exitRule();
		}
		return _localctx;
	}

In the function above, we can see a few key handlers:

  • Insert: match(INSERT);
  • Into: match(INTO);
  • Table name: tableName();
  • Values: insertValuesClause();
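To make this match-and-advance pattern concrete, here is a hand-rolled mini parser (not the ANTLR-generated code) that follows the same steps and records the start/stop indexes of the table name:

```java
// Hand-rolled mini parser sketch following the same match/advance steps as
// the generated insert() rule: match INSERT, match INTO, then read the table
// name and record its start/stop character indexes. Illustrative only.
public class MiniInsertParser {

    private final String sql;
    private int pos;

    public MiniInsertParser(String sql) {
        this.sql = sql;
    }

    // Consume an expected keyword, analogous to match(INSERT) / match(INTO).
    private void match(String keyword) {
        skipSpaces();
        if (!sql.regionMatches(true, pos, keyword, 0, keyword.length())) {
            throw new IllegalStateException("expected " + keyword + " at position " + pos);
        }
        pos += keyword.length();
    }

    private void skipSpaces() {
        while (pos < sql.length() && sql.charAt(pos) == ' ') {
            pos++;
        }
    }

    // Analogous to tableName(): returns {startIndex, stopIndex} of the table name.
    public int[] parseTableNamePosition() {
        match("INSERT");
        match("INTO"); // optional in real MySQL; required in this sketch
        skipSpaces();
        int start = pos;
        while (pos < sql.length() && sql.charAt(pos) != ' ' && sql.charAt(pos) != '(') {
            pos++;
        }
        return new int[] {start, pos - 1};
    }

    public static void main(String[] args) {
        int[] position = new MiniInsertParser(
                "INSERT INTO t_order (user_id, address_id, status) VALUES (?, ?, ?)")
                .parseTableNamePosition();
        System.out.println(position[0] + ".." + position[1]); // prints: 12..18
    }
}
```

The indexes recorded this way are exactly what later allows the logical table name to be swapped for the real one.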

Following its rules further down gets a bit complex: there are loops and nested processing that are hard to untangle clearly.

But the general idea is to obtain the corresponding start and end positions and related metadata, as shown below:

The final result is as follows:

Once the result is obtained, the function that returns it is as follows:

@RequiredArgsConstructor
public final class SQLParserExecutor {
    
    private final String databaseType;
    
    /**
     * Parse SQL.
     * 
     * @param sql SQL to be parsed
     * @return parse tree
     */
    public ParseTree parse(final String sql) {
        ParseASTNode result = twoPhaseParse(sql);
        if (result.getRootNode() instanceof ErrorNode) {
            throw new SQLParsingException("Unsupported SQL of `%s`", sql);
        }
        return result.getRootNode();
    }
}

result.getRootNode() is as follows:

@RequiredArgsConstructor
public final class ParseASTNode implements ASTNode {
    
    private final ParseTree parseTree;
    
    /**
     * Get root node.
     * 
     * @return root node
     */
    public ParseTree getRootNode() {
        return parseTree.getChild(0);
    }
}

Here getChild(0) returns the first child of the parse tree, i.e. the actual statement node.
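Why getChild(0)? The top-level rule node only wraps the actual statement node, so taking the first child unwraps it. A simplified sketch (node and rule names are illustrative):

```java
import java.util.List;

// Simplified parse-tree sketch: the top-level rule node wraps the actual
// statement node, so getChild(0) unwraps it, as ParseASTNode.getRootNode()
// does above. Node and rule names here are illustrative.
public class ParseTreeSketch {

    public record Node(String ruleName, List<Node> children) {
        public Node getChild(int i) {
            return children.get(i);
        }
    }

    public static void main(String[] args) {
        Node insert = new Node("insert", List.of());
        Node topLevel = new Node("execute", List.of(insert)); // wrapper rule
        System.out.println(topLevel.getChild(0).ruleName()); // prints: insert
    }
}
```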

Conclusion

Honestly, it still feels a bit dizzying; many places are not yet well understood.

But at least this exploration shows the critical path to real SQL:

  • The original LogicSQL statement goes through ShardingSphere’s syntax-tree parsing, which produces the metadata for each part, such as start and end indexes
  • From the syntax-tree parse result, the corresponding SQLTokens are derived, containing key information such as the mapping between logical tables and real tables in sharding
  • Real SQL is generated based on the SQLTokens