Zero Trust Security
Assumptions are the mother of all mistakes. I design systems that verify everything, trust nothing, and minimize attack surfaces.
Offensive Security Researcher & Senior Software Engineer
View my Resume
Principles that guide my approach to security research and software engineering.
Assumptions are the mother of all mistakes. I design systems that verify everything, trust nothing, and minimize attack surfaces.
Complexity is where vulnerabilities hide. I fight bloat to keep systems auditable, maintainable, and inherently secure.
Human labor is error-prone. I automate repetitive tasks to ensure consistency, reduce mistakes, and free up time for creative problem-solving.
Heuristics are exploitable. I trust logs, metrics, and Proof-of-Concepts (PoCs) over gut feelings to guide my architectural and security choices.
Whether I design for millions of users or a niche audience, I prioritize speed and efficiency to deliver seamless experiences.
Monoliths get messy. I build systems with interchangeable components to enhance flexibility, scalability, and ease of maintenance.
I publish detailed write-ups on my latest security research findings after full remediation & responsible disclosure.
Security research, infrastructure engineering, and the occasional rabbit hole.
Reverse engineering Cloudflare Turnstile, Google reCAPTCHA (v2/v3/Invisible), and DDoS-Guard to automate data collection in hostile environments.
The Why: Targeted threat communities actively weaponize anti-bot technology (CAPTCHAs, Proof-of-Work) to hide their data. Standard scrapers fail here; if you can't bypass the gate, you gain no intelligence.
The How: I moved beyond WebDriver to direct CDP (Chrome DevTools Protocol) injection for stealth automation. For reCAPTCHA, I built a standalone solver endpoint achieving 0.9+ confidence through behavioral pattern replication. The harder problem was proprietary obfuscated PoW challenges. I reverse engineered multiple cryptographic proof-of-work implementations to extract the validation logic and replicate it server-side.
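The proprietary challenges mentioned above are obfuscated and vendor-specific, but the replicate-it-server-side idea can be illustrated with the classic hashcash scheme: find a nonce whose hash clears a difficulty target. This is a minimal sketch of that pattern, not any vendor's actual algorithm.

```python
import hashlib
import itertools

def solve_pow(challenge: str, difficulty_bits: int = 12) -> int:
    """Brute-force a nonce so that sha256(challenge + nonce), read as an
    integer, falls below the difficulty target (hashcash-style PoW)."""
    target = 1 << (256 - difficulty_bits)  # any digest below this passes
    for nonce in itertools.count():
        digest = hashlib.sha256(f"{challenge}{nonce}".encode()).digest()
        if int.from_bytes(digest, "big") < target:
            return nonce

def verify_pow(challenge: str, nonce: int, difficulty_bits: int = 12) -> bool:
    """Server-side check: one hash, constant time, no brute force needed."""
    digest = hashlib.sha256(f"{challenge}{nonce}".encode()).digest()
    return int.from_bytes(digest, "big") < (1 << (256 - difficulty_bits))
```

Once the validation logic is extracted from the obfuscated client code, solving moves off the browser entirely: the expensive loop runs in a worker pool while verification stays a single hash.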
The Challenge: Bypassing the check is only step one. The real hurdle is preventing chain-bans in a distributed system. I engineered a custom Session Rotator with distributed locking (Redis/ZooKeeper) that ensures accounts are only 'checked out' by one worker at a time, preventing concurrent usage flags.
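The checkout semantics can be sketched with a lock-per-account pattern. Here a dict stands in for Redis; in production the same logic maps onto `SET key value NX PX` plus a compare-and-delete release. Class and method names are illustrative, not the real codebase.

```python
import time
import uuid

class SessionRotator:
    """Hands each account to at most one worker at a time. Locks carry an
    owner token and a TTL so a crashed worker can't hold an account forever."""

    def __init__(self, accounts, ttl=30.0):
        self.accounts = list(accounts)
        self.ttl = ttl
        self.locks = {}  # account -> (owner_token, expires_at)

    def checkout(self, worker_id: str):
        now = time.monotonic()
        for account in self.accounts:
            owner = self.locks.get(account)
            if owner is None or owner[1] < now:      # free, or lease expired
                token = f"{worker_id}:{uuid.uuid4()}"
                self.locks[account] = (token, now + self.ttl)
                return account, token
        return None  # every account is currently checked out

    def release(self, account: str, token: str) -> bool:
        owner = self.locks.get(account)
        if owner and owner[0] == token:              # only the holder may release
            del self.locks[account]
            return True
        return False
```

The token check on release is the important part: without it, a slow worker whose lease expired could delete a lock that now belongs to someone else, reintroducing exactly the concurrent-usage flags the rotator exists to prevent.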
A high-concurrency watchdog for monitoring unauthorized application distribution across unregulated third-party stores.
The Why: When modified banking or telco apps circulate on grey markets, they bypass business logic and compromise users. We needed to detect these 'mods' the moment they were uploaded.
The How: I built a pipeline that scrapes 30+ shadow app stores for both APK/IPA binaries and structured metadata (version history, permissions, developer info). For official Play Store data, I ported EFF's rs-google-play (a Rust-based reverse-engineered Google Play API) to Python using PyO3 and Maturin. The collected files feed into an automated SAST engine (MobSF) for binary decompilation and diff analysis against official releases.
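One of the cheapest diff signals in that analysis is the permission delta between a suspect build and the official release. A minimal sketch of the idea (the real pipeline diffs decompiled binaries, not just manifests):

```python
def diff_permissions(official: set[str], suspect: set[str]) -> dict:
    """Flag permissions a suspect build adds or drops relative to the
    official release. Newly added SMS or accessibility access in a
    'modded' banking app is a classic trojan tell."""
    return {
        "added": sorted(suspect - official),
        "removed": sorted(official - suspect),
    }
```

Anything in `added` escalates the sample for full decompilation and manual review.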
The Challenge: Ironically, poorly developed websites are harder to scrape than secure ones. Shadow stores often have broken HTML, non-standard DOMs, and anti-hotlinking measures. The difficulty wasn't just the scale; it was writing parsers robust enough to handle the chaos of the grey web.
Processing terabytes of unstructured data from leak sites and dark web forums into structured, queryable intelligence.
The Why: Raw data from the dark web is useless if it isn't searchable. We needed a way to correlate a handle on a Russian forum with a database leak on a file-sharing site instantly.
The How: I architected a modular ingestion engine using RabbitMQ and ZooKeeper to handle the throughput. Crucially, I enforced strict schema validation using Protocol Buffers (Protobuf). This forces data scraped from unstructured forum HTML into a strictly typed binary format before it hits our Data Lake.
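The shape of such a schema, sketched with illustrative field names (not the real one):

```protobuf
syntax = "proto3";

// Hypothetical message shape -- fields are illustrative only.
message ForumPost {
  string forum_id      = 1;
  string thread_id     = 2;
  string author_handle = 3;
  int64  posted_at     = 4;  // epoch seconds, normalized on ingestion
  string body_text     = 5;  // sanitized plain text; raw HTML is discarded
}
```

Because every consumer deserializes against the same `.proto`, a scraper that emits a malformed record fails loudly at serialization time instead of poisoning downstream correlation queries.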
The Challenge: Forums built on the same underlying frameworks (XenForo, phpBB, vBulletin) share DOM structures but implement custom anti-scraping logic. I wrote modular parsers that inherit base extraction logic per platform type, reducing code duplication significantly. The real challenge was handling unreliable data: missing fields, inconsistent encodings, malformed timestamps. The system validates and normalizes on ingestion, logging failures for manual review rather than silently corrupting the dataset.
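The inherit-and-override structure and the normalize-or-flag rule can be sketched like this. Class names and date formats are illustrative assumptions, not the real codebase:

```python
from datetime import datetime, timezone

class BaseForumParser:
    """Shared extraction logic; platform subclasses override only what
    differs (date formats, selectors, anti-scraping workarounds)."""
    DATE_FORMATS = ["%Y-%m-%d %H:%M:%S", "%d.%m.%Y %H:%M", "%b %d, %Y"]

    def normalize_timestamp(self, raw: str):
        raw = raw.strip()
        for fmt in self.DATE_FORMATS:
            try:
                return datetime.strptime(raw, fmt).replace(tzinfo=timezone.utc)
            except ValueError:
                continue
        return None  # flagged for manual review, never silently guessed

class XenForoParser(BaseForumParser):
    # XenForo-flavored format tried first, then the shared fallbacks.
    DATE_FORMATS = ["%b %d, %Y at %I:%M %p"] + BaseForumParser.DATE_FORMATS
```

Each subclass stays tiny: a handful of overridden formats and selectors, with validation and failure logging inherited from the base.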
Developing secure, self-hosted alternatives for sensitive internal operations using Go.
The Why: Using public tools (like Pastebin) for internal security operations is an OPSEC failure. We needed a fast, internal, air-gapped solution for sharing sensitive payloads and configs.
The How: I wrote 'Pasty', a high-performance storage engine in Go. To minimize maintenance, I architected it to be database-less; it uses S3 object metadata for state management. This allows us to spin up instances instantly via Docker without managing complex SQL migrations.
The Challenge: Simplicity shouldn't compromise functionality. I implemented a full GUI and API interface that supports advanced security features like 'Burn-After-Read', password protection, and auto-expiration purely via metadata logic.
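The metadata-only state trick looks roughly like this. The dict-backed store below stands in for an S3 bucket with user-defined object metadata; key names like `burn-after-read` are illustrative, not Pasty's actual schema:

```python
import time

class MetadataStore:
    """In-memory stand-in for S3: each object is a blob plus a metadata
    map, mirroring S3 user-defined object metadata. All lifecycle state
    lives in that map -- no database anywhere."""

    def __init__(self):
        self.objects = {}

    def put(self, key, blob, metadata):
        self.objects[key] = (blob, dict(metadata))

    def get(self, key):
        entry = self.objects.get(key)
        if entry is None:
            return None
        blob, meta = entry
        if meta.get("expires-at") and time.time() > float(meta["expires-at"]):
            del self.objects[key]          # auto-expiration: purge on read
            return None
        if meta.get("burn-after-read") == "true":
            del self.objects[key]          # served exactly once
        return blob
```

Since expiration and burn-after-read are evaluated lazily on read, there is no janitor process and no migration to run: a fresh Docker instance pointed at the bucket has the full state immediately.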
A quick overview of the tools, technologies, and methodologies I employ regularly.
ENGINEERING
SECURITY
TOOLS
I'm a polyglot developer. Here's a breakdown of the top programming languages I've used this year.
| Language | Time spent | Percentage |
|---|---|---|
| | 904 hrs 25 mins | 57.1% |
| | 192 hrs 34 mins | 12.2% |
| | 100 hrs 54 mins | 6.4% |
| | 82 hrs 32 mins | 5.2% |
| | 60 hrs 22 mins | 3.8% |
| | 55 hrs 50 mins | 3.5% |
| | 41 hrs 27 mins | 2.6% |
| | 30 hrs 30 mins | 1.9% |
| | 20 hrs 15 mins | 1.3% |
| | 17 hrs 42 mins | 1.1% |
| 64 languages in total | | |