Live · Updated 2026-04

GameLook

A platform that helps returning players understand what has changed in a game since they last played it.

TypeScript · Node.js · React · PostgreSQL

Problem

Game update information is fragmented and noisy. Players returning to a game after months or years usually have to dig through patch notes, announcement posts, DLC listings, store pages, and release timelines.

The result is high friction and poor discoverability. Even when the data exists, it is not stored in a way that makes historical understanding easy.

Goal

Build a system that can:

  • collect game updates continuously
  • structure them into readable timelines
  • avoid repeated expensive crawling
  • support followed games
  • summarize large volumes of change in a usable way

Source Monitoring Model

The system continuously monitors external sources for changes. The challenge is deciding which games deserve proactive crawling versus on-demand crawling — and structuring update history so it remains useful over time.

Hot vs Cold Game Strategy

Without a tiered crawl strategy, the platform risks wasting resources by treating every game equally. A small number of games drive most traffic and need low-latency answers. Everything else can tolerate slower on-demand processing.

  • Hot games — popular titles worth pre-crawling and keeping warm
  • Cold games — niche titles crawled on demand

This split allows lower infrastructure cost, better perceived responsiveness for popular games, controlled crawl frequency, and better reuse of stored results.
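The tiering above can be sketched in a few lines. This is an illustrative sketch, not the real implementation: the thresholds, the `GameStats` fields, and the 6-hour cadence are all assumptions chosen to show the shape of the decision.

```typescript
// Hypothetical sketch: assign a crawl tier per game from demand signals.
// Field names and thresholds are illustrative assumptions.

type CrawlTier = "hot" | "cold";

interface GameStats {
  slug: string;
  weeklyLookups: number; // how often users requested this game's timeline
  followers: number;     // users following the game
}

const HOT_LOOKUPS = 100;  // assumed threshold
const HOT_FOLLOWERS = 25; // assumed threshold

function classify(stats: GameStats): CrawlTier {
  return stats.weeklyLookups >= HOT_LOOKUPS || stats.followers >= HOT_FOLLOWERS
    ? "hot"
    : "cold";
}

// Hot games are pre-crawled on a schedule; cold games only on demand.
function crawlIntervalHours(tier: CrawlTier): number | null {
  return tier === "hot" ? 6 : null; // null = no proactive crawling
}
```

Keeping the tier a pure function of observable stats means games can move between tiers automatically as demand shifts.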

Data Storage and Cache Reuse

Storage and cache reuse were optimized to reduce repeated parsing and fetches. The system focuses on structured historical records instead of one-off summaries, and treats freshness, cost, and latency as competing concerns that need explicit trade-offs.

Trade-offs

The obvious but bad version of this system is: fetch often, parse everything repeatedly, and summarize from scratch on every request. That becomes expensive fast.

This project forced careful thinking around cache invalidation, data freshness, crawl prioritization, immutable history vs reprocessed summaries, and cost-aware architecture.
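The immutable-history-versus-reprocessed-summaries split, in particular, can be sketched as an append-only record store with summaries derived on top. The API below is hypothetical and deliberately minimal; the point is that raw records are never edited, so summaries can be regenerated whenever the summarization logic changes, without refetching any source.

```typescript
// Sketch of "immutable history, reprocessed summaries" (illustrative API).

interface UpdateRecord {
  readonly gameId: string;
  readonly publishedAt: string; // ISO date, so string comparison sorts correctly
  readonly title: string;
}

const history: UpdateRecord[] = []; // append-only; never edited in place

function recordUpdate(rec: UpdateRecord): void {
  history.push(rec);
}

// Derived view: cheap to recompute, safe to throw away and rebuild.
function summarize(gameId: string, since: string): string {
  const relevant = history.filter(
    (r) => r.gameId === gameId && r.publishedAt >= since,
  );
  return `${relevant.length} updates since ${since}`;
}
```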

Lessons Learned

The hardest part is not building the crawler — it is deciding when and how often to crawl. Every decision about freshness has a cost implication, and every caching decision has a correctness implication. The system design is the product.