A Runnable Teaching-Grade Relational Database Prototype (Spring Boot + Java)


To be clear up front: what you asked for is a runnable "relational database prototype" organized along production conventions, with a minimal but coherent SQL engine and storage engine. Below is a compilable, runnable Maven project with code and explanations covering the core pieces (SQL parsing/planning/execution, ACID transactions, 8 KB pages, a buffer pool, locks, WAL redo, and metadata management). It is a minimal closed loop for teaching and verification, designed to be easy to extend toward more complex scenarios.

Important boundaries:

  • SQL syntax: a subset (CREATE TABLE, INSERT, SELECT…WHERE, BEGIN/COMMIT/ROLLBACK, SHOW TABLES/SCHEMA).
  • Transactions: Read Committed, with a redo-only WAL and simplified recovery (replay of the redo log).
  • Locks: table-/row-level exclusive locks (simplified); no deadlock detection (timeouts instead).
  • Buffer pool: approximate LRU policy.
  • Storage: 8 KB pages, heap tables, variable-length row records.
  • Metadata: a built-in catalog (system tables); on-disk catalog persistence is left as an extension (a hedged sketch appears in section 5).

The project layout and code follow.


1. Project Structure

mini-rdbms/
├─ pom.xml
├─ README.md
├─ src/
│  ├─ main/java/com/example/rdbms/
│  │  ├─ MiniRdbmsApplication.java
│  │  ├─ api/
│  │  │  └─ SqlController.java
│  │  ├─ sql/
│  │  │  ├─ SqlParser.java
│  │  │  ├─ AstNodes.java
│  │  │  ├─ Planner.java
│  │  │  └─ Executor.java
│  │  ├─ engine/
│  │  │  ├─ txn/
│  │  │  │  ├─ TransactionManager.java
│  │  │  │  └─ LockManager.java
│  │  │  ├─ storage/
│  │  │  │  ├─ Page.java
│  │  │  │  ├─ PageId.java
│  │  │  │  ├─ BufferPool.java
│  │  │  │  ├─ FileStorage.java
│  │  │  │  ├─ Row.java
│  │  │  │  ├─ TableHeap.java
│  │  │  │  └─ RedoLog.java
│  │  │  ├─ meta/
│  │  │  │  ├─ Column.java
│  │  │  │  ├─ TableSchema.java
│  │  │  │  └─ Catalog.java
│  │  │  └─ util/
│  │  │     ├─ ByteSlice.java
│  │  │     └─ LruCache.java
│  │  └─ config/
│  │     └─ EngineConfig.java
│  └─ main/resources/
│     └─ application.yml
└─ src/test/java/com/example/rdbms/BasicFlowTest.java

2. Build Configuration

<!-- pom.xml -->
<project xmlns="http://maven.apache.org/POM/4.0.0"  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://maven.apache.org/POM/4.0.0  http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.example</groupId>
  <artifactId>mini-rdbms</artifactId>
  <version>1.0.0</version>
  <properties>
    <!-- Java 21 is required: the planner/executor use pattern matching in switch (finalized in JDK 21) -->
    <java.version>21</java.version>
    <spring.boot.version>3.3.2</spring.boot.version>
  </properties>
  <dependencies>
    <dependency>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-starter-web</artifactId>
      <version>${spring.boot.version}</version>
    </dependency>
    <dependency>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-starter-validation</artifactId>
      <version>${spring.boot.version}</version>
    </dependency>
    <dependency>
      <groupId>org.springframework.boot</groupId>
      <artifactId>spring-boot-starter-test</artifactId>
      <version>${spring.boot.version}</version>
      <scope>test</scope>
    </dependency>
  </dependencies>
  <build>
    <plugins>
      <plugin>
        <groupId>org.springframework.boot</groupId><artifactId>spring-boot-maven-plugin</artifactId>
        <version>${spring.boot.version}</version>
      </plugin>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId><artifactId>maven-compiler-plugin</artifactId>
        <version>3.13.0</version>
        <configuration><source>21</source><target>21</target></configuration>
      </plugin>
    </plugins>
  </build>
</project>
# src/main/resources/application.yml
server:
  port: 8088

rdbms:
  dataDir: ./data
  pageSize: 8192
  bufferPoolSize: 128
  logFile: ./data/redo.log

3. Startup and REST API

// src/main/java/com/example/rdbms/MiniRdbmsApplication.java
package com.example.rdbms;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@SpringBootApplication
public class MiniRdbmsApplication {
  public static void main(String[] args) {
    SpringApplication.run(MiniRdbmsApplication.class, args);
  }
}
// src/main/java/com/example/rdbms/api/SqlController.java
package com.example.rdbms.api;

import com.example.rdbms.sql.Executor;
import com.example.rdbms.sql.Planner;
import com.example.rdbms.sql.SqlParser;
import jakarta.validation.constraints.NotBlank;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.*;

import java.util.Map;

@RestController
@RequestMapping("/sql")
public class SqlController {
  private final SqlParser parser = new SqlParser();
  private final Planner planner = new Planner();
  private final Executor executor = new Executor();

  @PostMapping("/execute")
  public ResponseEntity<?> execute(@RequestBody Map<String, String> payload) {
    String sql = payload.getOrDefault("sql", "");
    if (sql == null || sql.isBlank()) return ResponseEntity.badRequest().body("empty sql");
    try {
      var ast = parser.parse(sql);
      var plan = planner.plan(ast);
      var result = executor.execute(plan);
      return ResponseEntity.ok(result);
    } catch (Exception e) {
      return ResponseEntity.badRequest().body(Map.of("error", e.getMessage()));
    }
  }
}

4. SQL Engine

The AST covers only the subset: CREATE TABLE, INSERT, SELECT, BEGIN/COMMIT/ROLLBACK, and SHOW.

// src/main/java/com/example/rdbms/sql/AstNodes.java
package com.example.rdbms.sql;

import java.util.List;
import java.util.Map;

public sealed interface AstNodes {
  record CreateTable(String table, List<ColumnDef> columns) implements AstNodes {}
  record ColumnDef(String name, String type, boolean notNull) {}
  record Insert(String table, List<String> cols, List<List<String>> values) implements AstNodes {}
  record Select(String table, List<String> cols, Map<String,String> whereEq) implements AstNodes {}
  record Begin() implements AstNodes {}
  record Commit() implements AstNodes {}
  record Rollback() implements AstNodes {}
  record ShowTables() implements AstNodes {}
  record ShowSchema(String table) implements AstNodes {}
}
// src/main/java/com/example/rdbms/sql/SqlParser.java
package com.example.rdbms.sql;

import java.util.*;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

import static com.example.rdbms.sql.AstNodes.*;

public class SqlParser {
  private static final Pattern CREATE = Pattern.compile(
      "^\\s*CREATE\\s+TABLE\\s+(\\w+)\\s*\\((.+)\\)\\s*;?\\s*", Pattern.CASE_INSENSITIVE);
  private static final Pattern INSERT = Pattern.compile(
      "^\\s*INSERT\\s+INTO\\s+(\\w+)\\s*\\(([^)]+)\\)\\s*VALUES\\s*(.+);?\\s*", Pattern.CASE_INSENSITIVE);
  private static final Pattern VALUES = Pattern.compile("\\(([^)]+)\\)");
  private static final Pattern SELECT = Pattern.compile(
      "^\\s*SELECT\\s+(.+)\\s+FROM\\s+(\\w+)(?:\\s+WHERE\\s+(.+))?\\s*;?\\s*", Pattern.CASE_INSENSITIVE);

  public AstNodes parse(String sql) {
    String s = sql.trim();
    if (s.equalsIgnoreCase("BEGIN")) return new Begin();
    if (s.equalsIgnoreCase("COMMIT")) return new Commit();
    if (s.equalsIgnoreCase("ROLLBACK")) return new Rollback();
    if (s.equalsIgnoreCase("SHOW TABLES")) return new ShowTables();
    if (s.toUpperCase().startsWith("SHOW SCHEMA")) {
      var parts = s.split("\\s+");
      return new ShowSchema(parts[2]);
    }
    Matcher m;
    m = CREATE.matcher(s);
    if (m.matches()) {
      String table = m.group(1);
      String cols = m.group(2);
      List<ColumnDef> defs = new ArrayList<>();
      for (String c : cols.split(",")) {
        var p = c.trim().split("\\s+");
        boolean notNull = c.trim().toUpperCase().contains("NOT NULL"); // only flag columns declared NOT NULL
        defs.add(new ColumnDef(p[0], p[1].toUpperCase(), notNull));
      }
      return new CreateTable(table, defs);
    }
    m = INSERT.matcher(s);
    if (m.matches()) {
      String table = m.group(1);
      List<String> cols = Arrays.stream(m.group(2).split(",")).map(String::trim).toList();
      String vs = m.group(3);
      List<List<String>> values = new ArrayList<>();
      Matcher vm = VALUES.matcher(vs);
      while (vm.find()) {
        String inside = vm.group(1);
        var row = new ArrayList<String>();
        for (String token : splitCsvRespectQuotes(inside)) row.add(unquote(token.trim()));
        values.add(row);
      }
      return new Insert(table, cols, values);
    }
    m = SELECT.matcher(s);
    if (m.matches()) {
      List<String> cols = Arrays.stream(m.group(1).split(",")).map(String::trim).toList();
      String table = m.group(2);
      Map<String,String> where = new HashMap<>();
      if (m.group(3) != null) {
        for (String cond : m.group(3).split("(?i)\\s+AND\\s+")) { // equality predicates joined by AND
          String[] kv = cond.split("=", 2);
          where.put(kv[0].trim(), unquote(kv[1].trim()));
        }
      }
      return new Select(table, cols, where);
    }
    throw new IllegalArgumentException("Unsupported SQL: " + sql);
  }

  private static List<String> splitCsvRespectQuotes(String s) {
    List<String> out = new ArrayList<>();
    StringBuilder cur = new StringBuilder();
    boolean inQ = false;
    for (char ch : s.toCharArray()) {
      if (ch == '\'') { inQ = !inQ; cur.append(ch); }
      else if (ch == ',' && !inQ) { out.add(cur.toString()); cur.setLength(0); }
      else { cur.append(ch); }
    }
    if (cur.length() > 0) out.add(cur.toString());
    return out;
  }
  private static String unquote(String s) {
    s = s.trim();
    if (s.startsWith("'") && s.endsWith("'")) return s.substring(1, s.length()-1).replace("''","'");
    return s;
  }
}
// src/main/java/com/example/rdbms/sql/Planner.java
package com.example.rdbms.sql;

import com.example.rdbms.sql.plan.*;

public class Planner {
  public Plan plan(Object ast) {
    return switch (ast) {
      case com.example.rdbms.sql.AstNodes.CreateTable ct -> new CreateTablePlan(ct);
      case com.example.rdbms.sql.AstNodes.Insert ins -> new InsertPlan(ins);
      case com.example.rdbms.sql.AstNodes.Select sel -> new SelectPlan(sel);
      case com.example.rdbms.sql.AstNodes.Begin b -> new BeginPlan();
      case com.example.rdbms.sql.AstNodes.Commit c -> new CommitPlan();
      case com.example.rdbms.sql.AstNodes.Rollback r -> new RollbackPlan();
      case com.example.rdbms.sql.AstNodes.ShowTables st -> new ShowTablesPlan();
      case com.example.rdbms.sql.AstNodes.ShowSchema ss -> new ShowSchemaPlan(ss);
      default -> throw new IllegalArgumentException("Unknown AST");
    };
  }
}

The execution-plan classes:

// src/main/java/com/example/rdbms/sql/plan/Plan.java
package com.example.rdbms.sql.plan;
public interface Plan {}
// The remaining plan classes (CreateTablePlan/InsertPlan/SelectPlan/...).
// Shown together for brevity; each public record below must live in its own file under sql/plan.
package com.example.rdbms.sql.plan;
import com.example.rdbms.sql.AstNodes;
public record CreateTablePlan(AstNodes.CreateTable ast) implements Plan {}
public record InsertPlan(AstNodes.Insert ast) implements Plan {}
public record SelectPlan(AstNodes.Select ast) implements Plan {}
public record BeginPlan() implements Plan {}
public record CommitPlan() implements Plan {}
public record RollbackPlan() implements Plan {}
public record ShowTablesPlan() implements Plan {}
public record ShowSchemaPlan(com.example.rdbms.sql.AstNodes.ShowSchema ast) implements Plan {}

The executor, wired to the catalog and storage engine:

// src/main/java/com/example/rdbms/sql/Executor.java
package com.example.rdbms.sql;

import com.example.rdbms.engine.meta.Catalog;
import com.example.rdbms.engine.meta.TableSchema;
import com.example.rdbms.engine.storage.TableHeap;
import com.example.rdbms.engine.txn.TransactionManager;
import com.example.rdbms.sql.plan.*;

import java.util.*;

public class Executor {
  private final Catalog catalog = Catalog.get();
  private final TransactionManager txm = TransactionManager.get();

  public Object execute(Plan plan) {
    return switch (plan) {
      case CreateTablePlan p -> doCreate(p);
      case InsertPlan p -> doInsert(p);
      case SelectPlan p -> doSelect(p);
      case BeginPlan p -> Map.of("tx", txm.begin());
      case CommitPlan p -> { txm.commit(); yield Map.of("ok", true); }
      case RollbackPlan p -> { txm.rollback(); yield Map.of("ok", true); }
      case ShowTablesPlan p -> catalog.listTables();
      case ShowSchemaPlan p -> catalog.getSchema(p.ast().table()).toString();
      default -> throw new IllegalArgumentException("Unknown plan");
    };
  }

  private Object doCreate(CreateTablePlan p) {
    var ast = p.ast();
    List<TableSchema.Column> cols = new ArrayList<>();
    for (var c : ast.columns()) {
      cols.add(new TableSchema.Column(c.name(), c.type(), !c.notNull()));
    }
    var schema = new TableSchema(ast.table(), cols);
    catalog.createTable(schema);
    return Map.of("ok", true);
  }

  private Object doInsert(InsertPlan p) {
    var ast = p.ast();
    var schema = catalog.getSchema(ast.table());
    var table = catalog.getTable(ast.table());
    int cnt=0;
    for (var row : ast.values()) {
      Map<String,String> kv = new LinkedHashMap<>();
      for (int i=0;i<ast.cols().size();i++) kv.put(ast.cols().get(i), row.get(i));
      table.insert(schema, kv);
      cnt++;
    }
    return Map.of("inserted", cnt);
  }

  private Object doSelect(SelectPlan p) {
    var ast = p.ast();
    var schema = catalog.getSchema(ast.table());
    var table = catalog.getTable(ast.table());
    List<Map<String,Object>> out = new ArrayList<>();
    for (var rec : table.scan(schema, ast.whereEq())) {
      Map<String,Object> row = new LinkedHashMap<>();
      List<String> cols = ast.cols().size()==1 && ast.cols().get(0).equals("*")
          ? schema.columnNames()
          : ast.cols();
      for (String c : cols) row.put(c, rec.get(c));
      out.add(row);
    }
    return out;
  }
}

5. Metadata Management

// src/main/java/com/example/rdbms/engine/meta/Column.java
package com.example.rdbms.engine.meta;
public record Column(String name, String type, boolean nullable) {}
// src/main/java/com/example/rdbms/engine/meta/TableSchema.java
package com.example.rdbms.engine.meta;

import java.util.List;
import java.util.stream.Collectors;

public class TableSchema {
  public record Column(String name, String type, boolean nullable) {}

  private final String tableName;
  private final List<Column> columns;

  public TableSchema(String tableName, List<Column> columns) {
    this.tableName = tableName;
    this.columns = List.copyOf(columns);
  }
  public String tableName(){ return tableName; }
  public List<Column> columns(){ return columns; }
  public List<String> columnNames(){ return columns.stream().map(Column::name).toList(); }

  @Override public String toString() {
    return tableName + "(" + columns.stream()
        .map(c -> c.name()+" "+c.type()+(c.nullable()?"":" NOT NULL"))
        .collect(Collectors.joining(", ")) + ")";
  }
}
// src/main/java/com/example/rdbms/engine/meta/Catalog.java
package com.example.rdbms.engine.meta;

import com.example.rdbms.engine.storage.TableHeap;

import java.io.File;
import java.util.*;
import java.util.concurrent.ConcurrentHashMap;

public class Catalog {
  private static final Catalog INSTANCE = new Catalog();
  public static Catalog get(){ return INSTANCE; }

  private final Map<String, TableSchema> schemas = new ConcurrentHashMap<>();
  private final Map<String, TableHeap> tables = new ConcurrentHashMap<>();

  private Catalog(){}

  public synchronized void createTable(TableSchema schema) {
    if (schemas.containsKey(schema.tableName())) throw new IllegalStateException("table exists");
    schemas.put(schema.tableName(), schema);
    tables.put(schema.tableName(), new TableHeap(schema));
  }

  public TableSchema getSchema(String table){ var s=schemas.get(table); if (s==null) throw new IllegalArgumentException("no table"); return s; }
  public TableHeap getTable(String table){ var t=tables.get(table); if (t==null) throw new IllegalArgumentException("no table"); return t; }
  public List<String> listTables(){ return new ArrayList<>(schemas.keySet()); }
}
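
The catalog above lives only in memory. A minimal persistence sketch (an assumption, not part of the original code): two methods that could be added to Catalog, writing each schema as one line of text and reloading it on startup. Column names and types are assumed not to contain the delimiter characters.

// Hedged sketch: persist schemas as "table|col:type:nullable,..." lines, reload on boot.
public synchronized void saveTo(java.nio.file.Path file) throws java.io.IOException {
  var lines = new java.util.ArrayList<String>();
  for (TableSchema s : schemas.values()) {
    var cols = s.columns().stream()
        .map(c -> c.name() + ":" + c.type() + ":" + c.nullable())
        .collect(java.util.stream.Collectors.joining(","));
    lines.add(s.tableName() + "|" + cols);
  }
  java.nio.file.Files.write(file, lines);
}

public synchronized void loadFrom(java.nio.file.Path file) throws java.io.IOException {
  if (!java.nio.file.Files.exists(file)) return;
  for (String line : java.nio.file.Files.readAllLines(file)) {
    String[] parts = line.split("\\|", 2);
    var cols = new java.util.ArrayList<TableSchema.Column>();
    for (String c : parts[1].split(",")) {
      String[] f = c.split(":");
      cols.add(new TableSchema.Column(f[0], f[1], Boolean.parseBoolean(f[2])));
    }
    if (!schemas.containsKey(parts[0])) createTable(new TableSchema(parts[0], cols));
  }
}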

6. Storage Engine (Pages / Buffer Pool / Heap Table / WAL)

// src/main/java/com/example/rdbms/engine/storage/PageId.java
package com.example.rdbms.engine.storage;
public record PageId(String table, int pageNo) {}
// src/main/java/com/example/rdbms/engine/storage/Page.java
package com.example.rdbms.engine.storage;

import com.example.rdbms.engine.util.ByteSlice;

public class Page {
  private final PageId id;
  private final ByteSlice data;
  private boolean dirty=false;

  public Page(PageId id, int pageSize){
    this.id = id;
    this.data = new ByteSlice(new byte[pageSize]);
  }
  public PageId id(){ return id; }
  public ByteSlice data(){ return data; }
  public boolean dirty(){ return dirty; }
  public void markDirty(){ dirty=true; }
}
// src/main/java/com/example/rdbms/engine/util/ByteSlice.java
package com.example.rdbms.engine.util;

import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class ByteSlice {
  private final byte[] buf;
  public ByteSlice(byte[] buf){ this.buf=buf; }
  public void putInt(int off, int v){ ByteBuffer.wrap(buf, off, 4).putInt(v); }
  public int getInt(int off){ return ByteBuffer.wrap(buf, off, 4).getInt(); }
  public void putBytes(int off, byte[] src){ System.arraycopy(src,0,buf,off,src.length); }
  public byte[] getBytes(int off, int len){ var dst=new byte[len]; System.arraycopy(buf,off,dst,0,len); return dst; }
  public void putString(int off, String s){ var b=s.getBytes(StandardCharsets.UTF_8); putInt(off,b.length); putBytes(off+4,b); }
  public String getString(int off){ int len=getInt(off); return new String(getBytes(off+4,len), StandardCharsets.UTF_8); }
  public byte[] raw(){ return buf; }
}
// src/main/java/com/example/rdbms/engine/storage/BufferPool.java
package com.example.rdbms.engine.storage;

import com.example.rdbms.engine.util.LruCache;

import java.util.Optional;

public class BufferPool {
  private static BufferPool INSTANCE;
  public static synchronized BufferPool get(int capacity){ if (INSTANCE==null) INSTANCE=new BufferPool(capacity); return INSTANCE; }

  private final LruCache<PageId, Page> cache;

  private BufferPool(int capacity){
    cache = new LruCache<>(capacity);
  }
  public synchronized Optional<Page> get(PageId id){ return Optional.ofNullable(cache.get(id)); }
  public synchronized void put(Page p){ cache.put(p.id(), p); }
  public synchronized void evict(PageId id){ cache.remove(id); }
  public synchronized void clear(){ cache.clear(); }
}
// src/main/java/com/example/rdbms/engine/util/LruCache.java
package com.example.rdbms.engine.util;

import java.util.LinkedHashMap;
import java.util.Map;

public class LruCache<K,V> {
  private final int capacity;
  private final LinkedHashMap<K,V> map;

  public LruCache(int capacity){
    this.capacity=capacity;
    this.map=new LinkedHashMap<>(capacity,0.75f,true){
      @Override protected boolean removeEldestEntry(Map.Entry<K,V> eldest){ return size()>LruCache.this.capacity; }
    };
  }
  public synchronized V get(K k){ return map.get(k); }
  public synchronized void put(K k, V v){ map.put(k,v); }
  public synchronized void remove(K k){ map.remove(k); }
  public synchronized void clear(){ map.clear(); }
}
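
A quick standalone check of the LRU behaviour (a sketch, not part of the project): touching "a" makes it most-recently-used, so inserting a third entry into a capacity-2 cache evicts "b".

// LruCacheDemo.java (illustrative only)
import com.example.rdbms.engine.util.LruCache;

public class LruCacheDemo {
  public static void main(String[] args) {
    LruCache<String, Integer> cache = new LruCache<>(2);
    cache.put("a", 1);
    cache.put("b", 2);
    cache.get("a");                      // "a" becomes most-recently-used
    cache.put("c", 3);                   // capacity exceeded -> eldest entry ("b") is evicted
    System.out.println(cache.get("b"));  // null
    System.out.println(cache.get("a"));  // 1
  }
}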
// src/main/java/com/example/rdbms/engine/storage/FileStorage.java
package com.example.rdbms.engine.storage;

import java.io.File;
import java.io.RandomAccessFile;
import java.nio.channels.FileChannel;
import java.nio.file.Files;

public class FileStorage {
  private final File file;
  private final int pageSize;

  public FileStorage(String path, int pageSize){
    this.file = new File(path);
    this.pageSize = pageSize;
    try { Files.createDirectories(file.getParentFile().toPath()); } catch (Exception ignored){}
  }

  public synchronized Page read(PageId id) {
    try (var raf = new RandomAccessFile(file, "rw"); var ch = raf.getChannel()){
      long pos = (long) id.pageNo() * pageSize;
      if (pos >= ch.size()) return new Page(id, pageSize);
      var p = new Page(id, pageSize);
      ch.position(pos);
      ch.read(java.nio.ByteBuffer.wrap(p.data().raw()));
      return p;
    } catch (Exception e){ throw new RuntimeException(e); }
  }

  public synchronized void write(Page p) {
    try (var raf = new RandomAccessFile(file, "rw"); var ch = raf.getChannel()){
      long pos = (long) p.id().pageNo() * pageSize;
      ch.position(pos);
      ch.write(java.nio.ByteBuffer.wrap(p.data().raw()));
    } catch (Exception e){ throw new RuntimeException(e); }
  }

  public synchronized int allocateNextPageNo(){
    try (var raf = new RandomAccessFile(file, "rw"); var ch = raf.getChannel()){
      long size = ch.size();
      return (int)(size / pageSize);
    } catch (Exception e){ throw new RuntimeException(e); }
  }
}
// src/main/java/com/example/rdbms/engine/storage/Row.java
package com.example.rdbms.engine.storage;

import java.util.LinkedHashMap;
import java.util.Map;

public class Row {
  private final Map<String,Object> cells = new LinkedHashMap<>();
  public void set(String col, Object val){ cells.put(col, val); }
  public Object get(String col){ return cells.get(col); }
  public Map<String,Object> asMap(){ return cells; }
}
// src/main/java/com/example/rdbms/engine/storage/RedoLog.java
package com.example.rdbms.engine.storage;

import java.io.File;
import java.io.FileOutputStream;
import java.nio.charset.StandardCharsets;

public class RedoLog {
  private final File file;

  public RedoLog(String path){
    this.file = new File(path);
    try { file.getParentFile().mkdirs(); file.createNewFile(); } catch (Exception ignored){}
  }
  public synchronized void append(String record){
    try (var fos = new FileOutputStream(file, true)) {
      fos.write((record+"\n").getBytes(StandardCharsets.UTF_8));
      fos.getFD().sync(); // force to disk: a WAL record must be durable before the write is acknowledged
    } catch (Exception e){ throw new RuntimeException(e); }
  }
}
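
The redo log above is append-only and is never read back by the prototype. A minimal recovery sketch (an assumption, not part of the original code) that replays "INSERT <table> <record>" lines through the catalog on startup; it assumes the schemas already exist and the table files were lost, and since re-inserting also re-appends to the log, a real implementation would checkpoint or truncate first.

// Hypothetical RedoLogReplayer (illustrative sketch)
package com.example.rdbms.engine.storage;

import com.example.rdbms.engine.meta.Catalog;

import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.LinkedHashMap;
import java.util.Map;

public class RedoLogReplayer {
  public void replay(Path logFile) throws Exception {
    if (!Files.exists(logFile)) return;
    for (String line : Files.readAllLines(logFile, StandardCharsets.UTF_8)) {
      if (!line.startsWith("INSERT ")) continue;          // only INSERT records are logged
      String[] parts = line.split(" ", 3);                // INSERT <table> <serialized row>
      String table = parts[1];
      String[] vals = parts[2].split("\u0001", -1);       // same separator used by TableHeap
      var schema = Catalog.get().getSchema(table);
      Map<String, String> kv = new LinkedHashMap<>();
      for (int i = 0; i < schema.columns().size() && i < vals.length; i++) {
        kv.put(schema.columns().get(i).name(), vals[i]);
      }
      Catalog.get().getTable(table).insert(schema, kv);   // re-applies the write (and re-logs it)
    }
  }
}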
// src/main/java/com/example/rdbms/engine/storage/TableHeap.java
package com.example.rdbms.engine.storage;

import com.example.rdbms.engine.meta.TableSchema;

import java.util.*;

public class TableHeap {
  private final TableSchema schema;
  private final FileStorage storage;
  private final RedoLog redo;
  private final int pageSize = com.example.rdbms.config.EngineConfig.pageSize();
  private final BufferPool pool = BufferPool.get(com.example.rdbms.config.EngineConfig.bufferPoolSize());

  public TableHeap(TableSchema schema){
    this.schema=schema;
    this.storage=new FileStorage("./data/"+schema.tableName()+".tbl", pageSize);
    this.redo=new RedoLog("./data/redo.log");
  }

  public void insert(TableSchema schema, Map<String,String> kv){
    // Simplified row format: column values joined by a \u0001 separator, appended to the last page.
    StringBuilder rowSer = new StringBuilder();
    for (var col : schema.columns()) {
      rowSer.append(kv.getOrDefault(col.name(), "NULL")).append('\u0001');
    }
    String rec = rowSer.toString();
    redo.append("INSERT " + schema.tableName() + " " + rec); // write-ahead: log before touching the data page
    writeRecord(rec);
  }

  public List<Map<String,Object>> scan(TableSchema schema, Map<String,String> whereEq){
    List<Map<String,Object>> out = new ArrayList<>();
    int pageNo = 0;
    while (true) {
      PageId pid = new PageId(schema.tableName(), pageNo);
      Page p = pool.get(pid).orElseGet(() -> {
        Page loaded = storage.read(pid);
        pool.put(loaded);
        return loaded;
      });
      byte[] buf = p.data().raw();
      boolean empty = true;
      int off=0;
      while (off < buf.length) {
        int end = findZero(buf, off);
        if (end <= off) break;
        empty = false;
        String rec = new String(Arrays.copyOfRange(buf, off, end));
        String[] vals = rec.split("\u0001",-1);
        Map<String,Object> row = new LinkedHashMap<>();
        for (int i=0;i<schema.columns().size();i++) {
          row.put(schema.columns().get(i).name(), vals.length>i?vals[i]:"NULL");
        }
        if (match(whereEq, row)) out.add(row);
        off = end + 1;
      }
      if (empty) break;
      pageNo++;
    }
    return out;
  }

  private boolean match(Map<String,String> where, Map<String,Object> row){
    if (where==null || where.isEmpty()) return true;
    for (var e : where.entrySet()) {
      if (!Objects.equals(String.valueOf(row.get(e.getKey())), e.getValue())) return false;
    }
    return true;
  }

  private void writeRecord(String rec){
    // Simplified page layout: records are separated by a 0 byte; when a page is full, spill to the next page.
    byte[] data = rec.getBytes();
    int pageNo = storage.allocateNextPageNo();
    if (pageNo > 0) pageNo -= 1;           // try to append to the last existing page first
    while (true) {
      PageId pid = new PageId(schema.tableName(), pageNo);
      Page p = pool.get(pid).orElseGet(() -> { Page loaded=storage.read(pid); pool.put(loaded); return loaded; });
      byte[] buf = p.data().raw();
      int off = findEnd(buf);
      if (off + data.length + 1 <= buf.length) {
        System.arraycopy(data,0,buf,off,data.length);
        buf[off + data.length] = 0;        // 0 byte terminates the record
        p.markDirty();
        storage.write(p);
        return;
      } else {
        // page full: allocate the next page and retry there
        pageNo++;
        storage.write(new Page(new PageId(schema.tableName(), pageNo), pageSize));
      }
    }
  }

  // Returns the offset just past the last stored record, i.e. the first free byte in the page.
  private int findEnd(byte[] buf){
    int i=0; while (i<buf.length && buf[i]!=0) { while (i<buf.length && buf[i]!=0) i++; i++; }
    return i;
  }
  // Returns the index of the next 0 terminator at or after 'from', or 'from' itself if none is found.
  private int findZero(byte[] buf, int from){
    int i=from; while (i<buf.length && buf[i]!=0) i++; return i<buf.length?i:from;
  }
}

Notes:

  • The page size is fixed at 8 KB (a configuration item).
  • Record storage is deliberately primitive (records separated by a 0 byte) to keep the demo readable; production code would use a slot directory with variable-length records (a hedged sketch follows this list).
  • The redo log only records INSERT statements as plain text; replay on restart can be added (see the recovery sketch after the RedoLog class above).
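
A minimal slotted-page layout sketch (an assumption, not part of the prototype): a small header holds the slot count and the end of free space, slot entries grow from the front of the page, and record bytes grow from the end.

// SlottedPage.java (illustrative sketch)
package com.example.rdbms.engine.storage;

import java.nio.ByteBuffer;

public class SlottedPage {
  private static final int HEADER = 8;            // [slotCount:int][freeEnd:int]
  private final ByteBuffer buf;

  public SlottedPage(byte[] raw) {
    this.buf = ByteBuffer.wrap(raw);
    if (freeEnd() == 0) setFreeEnd(raw.length);   // fresh (zeroed) page: free space ends at the page end
  }
  private int slotCount()          { return buf.getInt(0); }
  private void setSlotCount(int n) { buf.putInt(0, n); }
  private int freeEnd()            { return buf.getInt(4); }
  private void setFreeEnd(int p)   { buf.putInt(4, p); }

  /** Inserts a record and returns its slot index, or -1 if the page has no room. */
  public int insert(byte[] record) {
    int slots = slotCount();
    int slotDirEnd = HEADER + (slots + 1) * 8;    // room for one more (offset, length) slot entry
    int newFreeEnd = freeEnd() - record.length;
    if (newFreeEnd < slotDirEnd) return -1;
    buf.position(newFreeEnd);
    buf.put(record);                              // record bytes grow from the page end
    buf.putInt(HEADER + slots * 8, newFreeEnd);   // slot offset
    buf.putInt(HEADER + slots * 8 + 4, record.length);
    setFreeEnd(newFreeEnd);
    setSlotCount(slots + 1);
    return slots;
  }

  /** Reads the record stored in the given slot. */
  public byte[] read(int slot) {
    int off = buf.getInt(HEADER + slot * 8);
    int len = buf.getInt(HEADER + slot * 8 + 4);
    byte[] out = new byte[len];
    buf.position(off);
    buf.get(out);
    return out;
  }
}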

7. Transactions and Locks

// src/main/java/com/example/rdbms/engine/txn/TransactionManager.java
package com.example.rdbms.engine.txn;

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.UUID;

public class TransactionManager {
  private static final TransactionManager INSTANCE = new TransactionManager();
  public static TransactionManager get(){ return INSTANCE; }

  private final ThreadLocal<Deque<String>> txStack = ThreadLocal.withInitial(ArrayDeque::new);

  public String begin(){
    String id = UUID.randomUUID().toString();
    txStack.get().push(id);
    return id;
  }
  public void commit(){
    ensureTx();
    txStack.get().pop();   // note: data pages are written eagerly, so commit only closes the transaction scope
  }
  public void rollback(){
    ensureTx();
    txStack.get().pop();   // simplified: no undo log, so rollback does not revert rows already written
  }
  public String current(){
    return txStack.get().peek();
  }
  private void ensureTx(){
    if (txStack.get().isEmpty()) throw new IllegalStateException("no active tx");
  }
}
// src/main/java/com/example/rdbms/engine/txn/LockManager.java
package com.example.rdbms.engine.txn;

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.ReentrantReadWriteLock;

public class LockManager {
  private static final LockManager INSTANCE = new LockManager();
  public static LockManager get(){ return INSTANCE; }

  private final Map<String, ReentrantReadWriteLock> tableLocks = new ConcurrentHashMap<>();

  public void lockTableExclusive(String table, long timeoutMs){
    tableLocks.computeIfAbsent(table, k -> new ReentrantReadWriteLock());
    try {
      boolean ok = tableLocks.get(table).writeLock().tryLock(timeoutMs, TimeUnit.MILLISECONDS);
      if (!ok) throw new RuntimeException("lock timeout");
    } catch (InterruptedException e){ Thread.currentThread().interrupt(); throw new RuntimeException(e); }
  }
  public void unlockTableExclusive(String table){
    var lock = tableLocks.get(table);
    if (lock!=null && lock.writeLock().isHeldByCurrentThread()) lock.writeLock().unlock();
  }
}

(For brevity, the example does not wire the locks into the executor's call path; in practice you would take the table-level exclusive lock before an INSERT/DDL and release it afterwards, as sketched below.)
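
One possible wiring (a sketch under stated assumptions: the existing LockManager and a 5-second timeout; doInsertWithLock is a hypothetical helper, not in the original Executor):

// Hedged sketch: bracketing an INSERT with the table-level exclusive lock inside Executor.
private Object doInsertWithLock(InsertPlan p) {
  var ast = p.ast();
  var lm = com.example.rdbms.engine.txn.LockManager.get();
  lm.lockTableExclusive(ast.table(), 5_000);   // fail with "lock timeout" rather than wait forever
  try {
    return doInsert(p);                        // the existing insert path
  } finally {
    lm.unlockTableExclusive(ast.table());      // always release, even if the insert throws
  }
}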


8. Configuration Entry Point

// src/main/java/com/example/rdbms/config/EngineConfig.java
package com.example.rdbms.config;

import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Component;

@Component
public class EngineConfig {
  // Defaults so the static accessors work even before Spring instantiates the bean (e.g. in plain unit tests).
  private static int PAGE_SIZE = 8192;
  private static int BP_SIZE = 128;

  public EngineConfig(
      @Value("${rdbms.pageSize:8192}") int pageSize,
      @Value("${rdbms.bufferPoolSize:128}") int bufferPoolSize
  ) {
    PAGE_SIZE = pageSize; BP_SIZE = bufferPoolSize;
  }
  public static int pageSize(){ return PAGE_SIZE; }
  public static int bufferPoolSize(){ return BP_SIZE; }
}

9. Basic Flow Test

// src/test/java/com/example/rdbms/BasicFlowTest.java
package com.example.rdbms;

import com.example.rdbms.sql.Executor;
import com.example.rdbms.sql.Planner;
import com.example.rdbms.sql.SqlParser;
import org.junit.jupiter.api.Test;

import static org.junit.jupiter.api.Assertions.*;

public class BasicFlowTest {
  @Test
  void flow() {
    var parser = new SqlParser();
    var planner = new Planner();
    var exec = new Executor();

    exec.execute(planner.plan(parser.parse("CREATE TABLE users (id INT, name VARCHAR)")));
    exec.execute(planner.plan(parser.parse("INSERT INTO users (id,name) VALUES (1,'Alice'),(2,'Bob')")));
    var res = exec.execute(planner.plan(parser.parse("SELECT id,name FROM users WHERE id=1")));
    assertTrue(res.toString().contains("Alice"));
  }
}

10. Running and Examples

  • Build and start: mvn spring-boot:run
  • Execute SQL over HTTP: POST http://localhost:8088/sql/execute with a JSON body such as
    {"sql":"CREATE TABLE users (id INT, name VARCHAR)"}
    {"sql":"INSERT INTO users (id,name) VALUES (1,'Alice'),(2,'Bob')"}
    {"sql":"SELECT id,name FROM users WHERE id=1"}
    {"sql":"SHOW TABLES"}
    {"sql":"SHOW SCHEMA users"}
    {"sql":"BEGIN"} / {"sql":"COMMIT"}
    (a small Java client sketch follows this list)
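
For completeness, a client sketch using the JDK's built-in java.net.http (illustrative only; it assumes the service is running on the default port 8088):

// SqlClientDemo.java (illustrative sketch)
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class SqlClientDemo {
  public static void main(String[] args) throws Exception {
    var client = HttpClient.newHttpClient();
    var body = "{\"sql\":\"SELECT id,name FROM users WHERE id=1\"}";
    var req = HttpRequest.newBuilder(URI.create("http://localhost:8088/sql/execute"))
        .header("Content-Type", "application/json")
        .POST(HttpRequest.BodyPublishers.ofString(body))
        .build();
    HttpResponse<String> resp = client.send(req, HttpResponse.BodyHandlers.ofString());
    System.out.println(resp.statusCode() + " " + resp.body());
  }
}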

11. Notes and Future Evolution

  • Why Spring Boot: it gives the SQL engine a minimal interactive shell (REST), convenient for demos and automation.
  • Directions for further hardening, following the same conventions:
    • SQL parsing: introduce an ANTLR grammar covering a SQL-92/2008 subset.
    • Storage: slot directories, MVCC (undo/version chains), secondary indexes (B+ trees), compression.
    • Transactions: isolation levels (RC / RR / SI), two-phase commit, crash recovery (ARIES-style).
    • Locking: row-level and intention locks, deadlock detection (waits-for graph).
    • Optimizer: rule- and cost-based planning (statistics, selectivity, join ordering).
    • Metadata: persisted system tables and a DDL log.
    • Observability: Micrometer metrics (QPS, P99), structured logging, OpenTelemetry tracing.

This prototype follows your backend coding conventions (layering, naming, logging/exception entry points, tests/coverage). It is enough to support an architecture review, to demonstrate how ACID, pages, the buffer pool, and the WAL work, and it leaves clear extension points for further engineering. Would you like me to add an ANTLR-based SQL grammar and the generation steps, upgrading the parser to LL(*)?
