I build large-scale financial settlement pipelines and optimize data-intensive systems.
- Engineer focusing on high-throughput batch processing, reconciliation, and financial ledgers
- Experienced with multi-node batch workloads, partitioning, and query tuning
- Designs systems with low-latency caching (Caffeine) and distributed in-memory data grids (Hazelcast)
- Thinks in terms of locking strategies, concurrency control, and transaction performance
- Uses JPA only for read-optimized access; routes all heavy writes through custom JDBC batch writers
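A minimal sketch of the chunked JDBC batch-writer pattern referenced above (the table and column names are illustrative, not the real settlement schema):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

// Sketch: heavy writes bypass JPA and go through a plain
// PreparedStatement with addBatch/executeBatch, flushed per chunk.
public class LedgerBatchWriter {

    // Split rows into fixed-size chunks so each executeBatch()
    // stays within a predictable memory and round-trip budget.
    public static <T> List<List<T>> chunks(List<T> rows, int size) {
        List<List<T>> out = new ArrayList<>();
        for (int i = 0; i < rows.size(); i += size) {
            out.add(rows.subList(i, Math.min(i + size, rows.size())));
        }
        return out;
    }

    public record Entry(String account, long amountMinor) {}

    public static void write(Connection conn, List<Entry> entries, int chunkSize)
            throws SQLException {
        // Hypothetical table/columns for illustration only.
        String sql = "INSERT INTO ledger_entry (account, amount_minor) VALUES (?, ?)";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            for (List<Entry> chunk : chunks(entries, chunkSize)) {
                for (Entry e : chunk) {
                    ps.setString(1, e.account());
                    ps.setLong(2, e.amountMinor());
                    ps.addBatch();
                }
                ps.executeBatch(); // one DB round trip per chunk
            }
        }
    }
}
```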
- Obsessed with profiling: GC analysis, heap behavior, execution traces, and DB plan tuning
- Strong internal writer: concurrency series, settlement batch testing guides, practical SQL articles
- Winner of Angelhack Seoul 2024 – Financial Track
- Optimizing cross-DB reconciliation (EPAS ↔ Oracle ↔ MSSQL)
- Performance experiments on:
- pessimistic vs optimistic lock latency
- distributed cache hit ratio tuning
- batch partition sizing
- JVM memory & GC analysis
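The database-level experiment contrasts blocking row locks with version-check retries; the same trade-off can be reduced to a JDK-only sketch, a `ReentrantLock`-guarded counter vs. a CAS retry loop (a toy analogue, not the actual benchmark harness):

```java
import java.util.concurrent.atomic.AtomicLong;
import java.util.concurrent.locks.ReentrantLock;

// Pessimistic: acquire an exclusive lock before touching the state.
// Optimistic: read, attempt a compare-and-set, retry on conflict.
public class LockExperiment {
    private final ReentrantLock lock = new ReentrantLock();
    private long guarded = 0;
    private final AtomicLong cas = new AtomicLong();

    public long pessimisticIncrement() {
        lock.lock();               // block until exclusive access
        try {
            return ++guarded;
        } finally {
            lock.unlock();
        }
    }

    public long optimisticIncrement() {
        long prev;
        do {                       // retry loop stands in for a @Version recheck
            prev = cas.get();
        } while (!cas.compareAndSet(prev, prev + 1));
        return prev + 1;
    }
}
```

Under low contention the optimistic path usually wins (no blocking); under heavy contention the retry loop burns CPU, which is exactly what the latency experiments set out to measure.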
- Enhancing financial validation flows, writing 50+ verification queries to detect anomalies early
- Distributed caching strategies (Caffeine → Hazelcast IMap/Tiered)
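The tiering idea, a small local near cache in front of a shared distributed map, can be sketched with plain JDK types (an access-ordered `LinkedHashMap` standing in for Caffeine, a `ConcurrentHashMap` standing in for a Hazelcast IMap):

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Function;

// Read-through tiering: check the bounded local (near) cache first,
// fall back to the shared map, then to the loader on a full miss.
public class TieredCache<K, V> {
    private final Map<K, V> near;
    private final ConcurrentHashMap<K, V> distributed = new ConcurrentHashMap<>();
    private final Function<K, V> loader;

    public TieredCache(int nearCapacity, Function<K, V> loader) {
        this.loader = loader;
        // Access-order LinkedHashMap gives a crude LRU near tier.
        this.near = new LinkedHashMap<>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
                return size() > nearCapacity;
            }
        };
    }

    public synchronized V get(K key) {
        V v = near.get(key);
        if (v != null) return v;                       // near-tier hit
        v = distributed.computeIfAbsent(key, loader);  // shared-tier hit or load
        near.put(key, v);                              // promote to near tier
        return v;
    }
}
```

Real deployments add the parts elided here: TTL/size policies per tier, invalidation when the distributed map changes, and hit-ratio metrics per tier.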
- Locking semantics, transaction isolation, deadlock analysis
- High-performance SQL (execution plan reasoning, join strategies, partitioned reads)
- Operational visibility (Metabase dashboards, batch metrics, alerting flows)
- GC analysis, memory footprint optimization
- Clean, maintainable architecture & technical writing
- Writes internal engineering articles:
- Concurrency in Java (series)
- Batch Testing Strategy for Financial Systems
- Markdown to PPT: Developer-friendly Documentation Workflow
- Contributes to team knowledge sharing and engineering culture improvement
- Leads retrospectives and drives continuous process refinement
- Jazz & Rock listener
- Minimal, text-focused web design (building tinawiki.com in pure HTML/CSS/JS)
- Loves writing clear documentation and deeply reasoned architecture notes



