Hacker News


Your AI Slop Bores Me

Fri, 03/06/2026 - 1:22pm

Article URL: https://youraislopbores.me/

Comments URL: https://news.ycombinator.com/item?id=47278921

Points: 1

# Comments: 1

Categories: Hacker News

Security Scanner for Agent Skills

Fri, 03/06/2026 - 1:21pm
Categories: Hacker News

Show HN: Go-TUI – A framework for building declarative terminal UIs in Go

Fri, 03/06/2026 - 1:19pm

I've been building go-tui (https://go-tui.dev), a terminal UI framework for Go inspired by the templ framework for the web (https://templ.guide/). The syntax should be familiar to templ users and is quite different from other terminal frameworks like bubbletea. Instead of imperative widget manipulation or bubbletea's Elm architecture, you write HTML-like syntax and Tailwind-style classes that can intermingle with regular Go code in a new .gsx filetype. Then you compile these files to type-safe Go using `tui generate`. At runtime there's a flexbox layout engine based on yoga that handles positioning, and a double-buffered renderer that diffs output to minimize terminal writes.

Here are some other features in the framework:

- It supports reactive state with State[T]. You change a value and the framework redraws for you. You can also forgo reactivity and simply use pure components if you'd like.

- You can render out a single frame to the terminal scrollback if you don't care about full UIs and just want to place a box, table, or other styled component into your stdout. It's super handy and avoids the headache of dealing with ANSI escape sequences directly.

- It supports an inline mode that lets you embed an interactive widget in your shell session instead of taking over the full screen. With it you can build things like custom streaming chat interfaces directly in the terminal.

- I built full editor support for the new filetype. I published a VS Code and Open-VSX extension with completion, hover, and go-to-definition. Just search for "go-tui" in the marketplace to find them. The repo also includes a tree-sitter grammar for Neovim/Helix, and an LSP that proxies Go features through gopls so the files are easy to work with.
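To give a feel for the reactive-state idea, here's a minimal sketch in plain Go of a State[T] that notifies subscribers on change. The names and shape here are hypothetical, invented for illustration, and are not go-tui's actual API:

```go
package main

import "fmt"

// State holds a value and notifies subscribers when it changes.
// Hypothetical sketch of the State[T] idea; not go-tui's real API.
type State[T any] struct {
	value T
	subs  []func(T)
}

func NewState[T any](initial T) *State[T] {
	return &State[T]{value: initial}
}

func (s *State[T]) Get() T { return s.value }

// Set updates the value and triggers every subscriber, which is
// where a framework would schedule a redraw of affected components.
func (s *State[T]) Set(v T) {
	s.value = v
	for _, fn := range s.subs {
		fn(v)
	}
}

func (s *State[T]) Subscribe(fn func(T)) {
	s.subs = append(s.subs, fn)
}

func main() {
	count := NewState(0)
	count.Subscribe(func(n int) {
		fmt.Printf("redraw: count=%d\n", n)
	})
	count.Set(1)
	count.Set(2)
}
```

The point is just the shape of the contract: components read via Get, mutate via Set, and the framework owns the redraw scheduling behind Subscribe.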

There are roughly 20 examples in the repo covering everything from basic components to a dashboard with live metrics and sparklines. I also built an example wrapper for Claude Code if you want to build your own AI chat interface.

Docs & guides: https://go-tui.dev

Repo: https://github.com/grindlemire/go-tui

I'd love feedback on the project!

Comments URL: https://news.ycombinator.com/item?id=47278869

Points: 1

# Comments: 0

Categories: Hacker News

Never Bet Against x86

Fri, 03/06/2026 - 1:17pm
Categories: Hacker News

Show HN: Max – a federated data query layer for AI agents (and humans)

Fri, 03/06/2026 - 1:14pm

Hey HN! I built a thing and I'm really excited to share it.

EDIT: I meant to link to the github, not the website: https://github.com/max-hq/max

Like many of us here, I've been reaching for the "pull data into a db; give it to claude" pattern for a while, whether I'm spelunking through data or building tooling - for the same reasons thellimist mentions over here [1] and in a few other recent "CLI vs MCP" posts.

To that end, about a month ago I started building a project called `max` - its goal is to cut out the middleman and schematise any data source for you. Essentially, it provides a lingua franca for synchronising and searching data.

In short: Max exposes a CLI for any given data source and mirrors it locally - that is, it puts the data right next to the agent. Search is local and fast, and ready for cut, sed, grep, sort, etc.

More concretely:

> max connect @max/connector-gmail --name gmail-1
> max sync gmail-1

> # show me what data i can search for
> max schema @max/connector-gmail

> # do a search
> max search gmail-1 --filter "subject ~= Apples" --fields=subject,from,time

I've built a few connectors over at `max-hq/max-connectors` - but the goal is that they're easy to create (sync is done via graph walk - max makes you provide field resolution so it can figure out how to sync).
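As a rough illustration of the connector idea, a connector could expose a schema plus per-field resolution, and sync could walk those fields into a local mirror. Everything below (the Connector interface, gmailStub, syncRecord) is made up for this sketch and is not max's actual API:

```go
package main

import "fmt"

// Connector is a hypothetical sketch of a max-style connector:
// it declares searchable fields and resolves them per record.
type Connector interface {
	Schema() []string                             // searchable fields
	Resolve(field string, id int) (string, error) // field resolution used by sync
}

// gmailStub is a toy connector with one canned record.
type gmailStub struct{}

func (gmailStub) Schema() []string { return []string{"subject", "from", "time"} }

func (gmailStub) Resolve(field string, id int) (string, error) {
	data := map[int]map[string]string{
		1: {"subject": "Apples", "from": "alice@example.com", "time": "2026-03-06"},
	}
	row, ok := data[id]
	if !ok {
		return "", fmt.Errorf("no record %d", id)
	}
	return row[field], nil
}

// syncRecord walks every schema field for one record, mirroring it
// into a local map that an agent can then grep and filter over.
func syncRecord(c Connector, id int) (map[string]string, error) {
	local := map[string]string{}
	for _, f := range c.Schema() {
		v, err := c.Resolve(f, id)
		if err != nil {
			return nil, err
		}
		local[f] = v
	}
	return local, nil
}

func main() {
	rec, err := syncRecord(gmailStub{}, 1)
	if err != nil {
		panic(err)
	}
	fmt.Println(rec["subject"])
}
```

The design point is that the connector only has to describe its fields and how to fetch them; the sync engine owns the walk and the local mirror.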

In practice - I've found that telling claude to run "max -g llm-bootstrap" to get acquainted, and then "make a connector for X" also works pretty well :).

There's a lot still to come(!) - realtime, somewhere to host connectors, exposing and serving max nodes... I'll be updating the roadmap over the next couple of days - but I didn't want to wait any longer before sharing here.

(on that note - max is designed for federation. The core is platform-agnostic)

In terms of what this approach makes possible - I ran a benchmark on a challenge (it's the one on the website), asking claude to find names of a particular form in a fairly chunky hubspot (100k contacts). The metrics are roughly what you'd expect from keeping the data local and avoiding any tokens hitting claude's context window:

MCP: 18M tokens | 80m time | $180 cost

Max: 238 tokens | 27s time | $0.003 cost

(I'll explain how these numbers were calculated in a new reply)

It's still early (alpha) but if you're building agents or just want local data, please try it and tell me what breaks.

Thanks!

[1] https://news.ycombinator.com/item?id=47157398

Comments URL: https://news.ycombinator.com/item?id=47278802

Points: 3

# Comments: 0

Categories: Hacker News

Show HN: MyChatArchive – bring your full ChatGPT history into Claude via MCP

Fri, 03/06/2026 - 1:14pm

Switched from ChatGPT to Claude and realized the official migration only transfers what ChatGPT remembers about you, not your actual conversations. Built a local pipeline that imports full exports, generates semantic embeddings on your machine, and serves them via an MCP server. Claude Desktop and Cursor can search your entire chat history by meaning during any conversation. No cloud, no API keys for the core pipeline. Also supports Claude Code, Cursor, and Grok exports.

Comments URL: https://news.ycombinator.com/item?id=47278798

Points: 1

# Comments: 0

Categories: Hacker News

Show HN: Anchor Engine – Deterministic Semantic Memory for LLMs, Local (<3GB RAM)

Fri, 03/06/2026 - 11:27am

Anchor Engine is ground truth for personal and business AI. A lightweight, local-first memory layer that lets LLMs retrieve answers from your actual data—not hallucinations. Every response is traceable, every policy enforced. Runs in <3GB RAM. No cloud, no drift, no guessing. Your AI's anchor to reality.

We built Anchor Engine because LLMs have no persistent memory. Every conversation is a fresh start—yesterday's discussion, last week's project notes, even context from another tab—all gone. Context windows help, but they're ephemeral and expensive. The STAR algorithm (Semantic Traversal And Retrieval) takes a different approach. Instead of embedding everything into vector space, STAR uses deterministic graph traversal. But before traversal comes atomization—our lightweight process for extracting just enough conceptual structure from text to build a traversable semantic graph.

*Atomization, not exhaustive extraction.* Projects like Kanon 2 are doing incredible work extracting every entity, citation, and clause from documents with remarkable precision. That's valuable for document intelligence. Anchor Engine takes a different path: we extract only the core concepts and relationships needed to support semantic memory. For example, "Apple announced M3 chips with 15% faster GPU performance" atomizes to nodes for [Apple, M3, GPU] and edges for [announced, has-performance]. Just enough structure for retrieval, lightweight enough to run anywhere.

The result is a graph that's just rich enough for an LLM to retrieve relevant context, but lightweight enough to run offline in <3GB RAM—even on a Raspberry Pi or in a browser via WASM.
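To make the atomization example concrete, here's a tiny sketch of the Apple/M3 sentence as a labeled graph, with a traversal that returns neighbors in sorted order so the same graph and query always give the same result. The types and method names are hypothetical illustrations, not Anchor Engine's actual API (which is a Node project), and the sketch is in Go only for consistency with the rest of this page:

```go
package main

import (
	"fmt"
	"sort"
)

// Graph is a minimal semantic graph: nodes are concepts and
// edges are labeled relationships. Hypothetical sketch only.
type Graph struct {
	edges map[string]map[string]string // from -> to -> edge label
}

func NewGraph() *Graph {
	return &Graph{edges: map[string]map[string]string{}}
}

func (g *Graph) Add(from, label, to string) {
	if g.edges[from] == nil {
		g.edges[from] = map[string]string{}
	}
	g.edges[from][to] = label
}

// Neighbors returns the edges out of a node in sorted order, so
// retrieval is deterministic: same graph, same query, same output.
func (g *Graph) Neighbors(from string) []string {
	var out []string
	for to, label := range g.edges[from] {
		out = append(out, label+" -> "+to)
	}
	sort.Strings(out)
	return out
}

func main() {
	// "Apple announced M3 chips with 15% faster GPU performance"
	// atomized to nodes [Apple, M3, GPU] and labeled edges.
	g := NewGraph()
	g.Add("Apple", "announced", "M3")
	g.Add("M3", "has-performance", "GPU")
	fmt.Println(g.Neighbors("Apple"))
}
```

Sorting before returning is what makes the toy version deterministic and inspectable: you can read exactly why a node was reached by following labeled edges, with no similarity score involved.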

*Why graph traversal instead of vector search?*

- Embeddings drift over time and across models
- Similarity scores are opaque and nondeterministic
- Vector search often requires GPUs or cloud APIs
- You can't inspect why something was retrieved

STAR gives you deterministic, inspectable results. Same graph, same query, same output—every time. And because the graph is built through atomization, it stays small and portable.

*Key technical details:*

- Runs entirely offline in <3GB RAM. No API calls, no GPUs.
- Compiled to WASM – embed it anywhere, including browsers.
- Recursive architecture – we used Anchor Engine to help write its own code. The dogfooding is real: what would have taken months of context-switching became continuous progress. I could hold complexity in my head because the engine held it for me.
- AGPL-3.0 – open source, always.

*What it's not:* It's not a replacement for LLMs or vector databases. It's a memory layer—a deterministic, inspectable substrate that gives LLMs persistent context without cloud dependencies. And it's not a competitor to deep extraction models like Kanon 2; they could even complement each other (Kanon 2 builds the graph, Anchor Engine traverses it for memory).

*The whitepaper* goes deep on the graph traversal math and includes benchmarks vs. vector search: https://github.com/RSBalchII/anchor-engine-node/blob/d9809ee...

If you've ever wanted LLM memory that fits on a Raspberry Pi and doesn't hallucinate what it remembers—check it out, and I'd love your feedback on where graph traversal beats (or loses to) vector search.

We're especially interested in feedback from people who've built RAG systems, experimented with symbolic memory, or worked on graph-based AI.

Comments URL: https://news.ycombinator.com/item?id=47277084

Points: 1

# Comments: 0

Categories: Hacker News

Show HN: Decidel – A Hacker News client for iOS with smart summaries and filtering

Fri, 03/06/2026 - 11:23am

I've been reading HN every day for months and always wished the experience was a bit smarter: less noise, more signal, without losing the depth that makes HN worth reading. Decidel is what I built to fix that. It's an iOS client with AI-powered thread summaries, semantic topic filtering (mute topics you don't care about), threaded discussions, offline reading, and export to Markdown, Notion, or Obsidian. You bring your own API key. This is a rapid first release, and a web version is in the works. Happy to answer any questions, and I'd genuinely appreciate any feedback, especially from daily HN readers.

App Store https://apps.apple.com/app/decidel/id6759561178

Comments URL: https://news.ycombinator.com/item?id=47277018

Points: 1

# Comments: 1

Categories: Hacker News

Don't Get Distracted

Fri, 03/06/2026 - 11:23am
Categories: Hacker News
