Feed aggregator

Qcut – Free browser video editor (no install, no signup)

Hacker News - Fri, 03/06/2026 - 12:42am

Article URL: https://qcut.app/

Comments URL: https://news.ycombinator.com/item?id=47271311

Points: 3

# Comments: 2

Categories: Hacker News

Ask HN: Do you have a good solution for isolated workspaces per project?

Hacker News - Fri, 03/06/2026 - 12:33am

I often work on 2-3 projects at once. They each have some combination of:

- terminal windows
- browser for testing
- browser for research, brainstorming, etc.
- documents & Finder windows
- various tools (Expo, etc.)

I have a lot of trouble keeping these separate. I use macOS: you can have many desktops, but they're all in the same workspace. I think what I want is something like tmux for my whole computer, where I can switch away from a project and come back to where I left off, with only the content from that project.

I actually tried to build this myself at the OS level, but macOS seems to lock everything down pretty hard.

Anybody have a good solution?

Comments URL: https://news.ycombinator.com/item?id=47271254

Points: 1

# Comments: 0

Categories: Hacker News

Show HN: Triplecheck – Review your code free with local LLMs

Hacker News - Fri, 03/06/2026 - 12:03am

Hey HN, I built triplecheck because I wanted deep AI code review without paying $24/mo per seat.

The idea: instead of one LLM pass that drops comments (like CodeRabbit/Sourcery), triplecheck runs a full loop:

1. Reviewer finds bugs → structured findings with file, line, severity
2. Coder writes actual patches (search/replace diffs, not suggestions)
3. Tests run automatically to catch regressions
4. Loop until no new findings or max rounds
5. Judge scores the final result 0–10
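The control flow of that loop can be sketched as follows. This is a minimal illustration, not triplecheck's actual code: `review`, `apply_patches`, `run_tests`, and `judge` are hypothetical stand-ins for the underlying LLM calls.

```python
# Sketch of a review -> patch -> test loop. The four callables are
# hypothetical stand-ins for LLM/test invocations; only the control
# flow described in the post is illustrated here.
def review_loop(files, review, apply_patches, run_tests, judge, max_rounds=5):
    findings_history = []
    for _ in range(max_rounds):
        findings = review(files)                 # structured: file, line, severity
        if not findings:
            break                                # clean: no new findings
        files = apply_patches(files, findings)   # coder writes real diffs
        if not run_tests(files):                 # test gate catches regressions
            raise RuntimeError("patch introduced a regression")
        findings_history.append(findings)
    return judge(files), findings_history        # final 0-10 score
```

The test gate sits inside the loop, so a hallucinated patch fails fast instead of being re-reviewed.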

The key insight: with local LLMs, compute is free, so you can afford to be thorough. Run 5 review passes from different angles, vote to filter noise, let the coder fix everything, and re-review until clean. Try doing that with a $0.03/1K token API.

What works well:

- Qwen3-Coder on vLLM/Ollama handles reviewer + coder surprisingly well
- Multi-pass voting genuinely reduces false positives — 3 passes agreeing > 1 pass guessing
- Tree-sitter dependency graph means the reviewer sees related files together, not random batches
- Scanned a 136K-line Go codebase (70 modules) — found real bugs, not just style nits

What's missing (honest):

- No GitHub PR integration yet (CLI only — you run it, read the report). This is the #1 gap vs CodeRabbit. It's on the roadmap.
- No incremental/diff-only review — it reviews whole files. Fine for local LLMs (free), wasteful for cloud APIs.
- Local LLMs still hallucinate fixes sometimes. The test gate catches most of it, but you should review the diff before merging.

Stack: Python, Click CLI, any OpenAI-compatible backend. Works with vLLM, Ollama, LM Studio, DeepSeek, OpenRouter, Claude CLI. Mix and match — e.g. local Qwen running on M3 Ultra for reviewer/coder + Claude for judge.
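"Any OpenAI-compatible backend" works because vLLM, Ollama, and LM Studio all expose the same `/chat/completions` request shape, so only the base URL and model name change. A minimal sketch (the URL, model name, and helper below are placeholders, not triplecheck's config):

```python
# Sketch: building an OpenAI-style chat completion request for a local
# backend. URL and model name are illustrative placeholders.
import json

def chat_request(base_url: str, model: str, prompt: str) -> dict:
    """Assemble the POST target and JSON body for an OpenAI-compatible API."""
    return {
        "url": f"{base_url}/chat/completions",
        "body": json.dumps({
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
        }),
    }

req = chat_request("http://localhost:11434/v1", "qwen3-coder", "Review this diff")
```

Swapping reviewer/coder/judge between backends is then just three different `base_url`/`model` pairs.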

Would love feedback, especially from anyone running local models for dev tools. What review capabilities would make you actually use this in your workflow?

Comments URL: https://news.ycombinator.com/item?id=47271100

Points: 1

# Comments: 0

Categories: Hacker News

Gemini-flash-latest silently broke Search grounding for 1 month

Hacker News - Fri, 03/06/2026 - 12:03am

On January 21, Google quietly changed the gemini-flash-latest alias to point to gemini-3-flash-preview — a model that does not support Google Search grounding.

The API never returned an error. HTTP 200, valid JSON, finish reason: STOP. The only thing missing was groundingMetadata. No warning, no deprecation notice, nothing.

I run PartsplanAI, a B2B electronic components marketplace. Grounding is not optional for us — we use it to verify part specs against real datasheets and prevent hallucination. Wrong capacitance values or voltage ratings from a language model aren't just embarrassing; they cause real problems for engineers downstream.

For approximately one month (late January through February 27), our AI features ran without any grounding. Every part recommendation, every spec search — generated purely from the model's pre-trained knowledge. The corrupted data accumulated in our database and was served to B2B customers. We had no idea.

On February 27, I noticed recommendations weren't matching real datasheets. What followed was 16 hours of debugging — 63 Git commits, 13 different approaches. I rewrote prompts, rebuilt the search pipeline, changed configurations, adjusted timeouts, switched between parallel and sequential calls. Nothing worked, because the problem was never in my code.

The fix: switch to gemini-2.5-flash. 20 minutes. Done.

The changelog entry for January 21 reads only: "gemini-flash-latest alias now points to gemini-3-flash-preview"

No mention of grounding regression. No compatibility warning.

There's also a second undocumented behavior: on gemini-2.5-flash, if you set responseMimeType: 'application/json' and googleSearch simultaneously, the search is silently ignored. No error, no docs, no warning.

GitHub Issue #384 (google-gemini/generative-ai-js) confirms the grounding issue was known in the community before the alias change was made.

The January 21 changelog was published the same week as the Gemini 2.0 Flash general availability announcement.

If you're using gemini-flash-latest with grounding, verify that groundingMetadata is actually present in your responses. You may have been affected since January 21.
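One way to guard against this class of silent failure is to check the parsed generateContent response for `groundingMetadata` before trusting the answer. A sketch over the REST response shape (illustrative; field names follow the public response format, but verify against your own responses):

```python
# Sketch: detect silent grounding loss. `resp` is the parsed JSON of a
# generateContent response; a 200 with finishReason STOP can still
# fail this check, which is exactly the failure mode described above.
def grounding_present(resp: dict) -> bool:
    for candidate in resp.get("candidates", []):
        if candidate.get("groundingMetadata"):
            return True
    return False
```

Wiring this into an alert would have turned a month-long silent regression into a same-day page.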

Comments URL: https://news.ycombinator.com/item?id=47271099

Points: 2

# Comments: 0

Categories: Hacker News

Show HN: WingNews – Htmx Hacker News Reader

Hacker News - Thu, 03/05/2026 - 11:51pm

WingNews is a dark mode Hacker News reader client built with HTMX and Go. Any suggestions are greatly appreciated.

Comments URL: https://news.ycombinator.com/item?id=47271019

Points: 1

# Comments: 0

Categories: Hacker News

Ask HN: Wish Linux tmpfs support compression option

Hacker News - Thu, 03/05/2026 - 11:51pm

I run lots of unprivileged containers, and apps inside create tons of temporary files. tmpfs is perfect for this (easy to mount in unprivileged containers). Adding a compression feature would help a lot.

zram needs root to manage. mkfs.btrfs setup feels way too heavy for what is basically “compressed /tmp”.

Why has tmpfs never gotten an official compression feature?

Comments URL: https://news.ycombinator.com/item?id=47271012

Points: 1

# Comments: 0

Categories: Hacker News

Create PDF Resume

Hacker News - Thu, 03/05/2026 - 11:43pm

Article URL: https://createpdfresume.com/

Comments URL: https://news.ycombinator.com/item?id=47270955

Points: 1

# Comments: 0

Categories: Hacker News

Show HN: Yappy – A Python TUI to automate LinkedIn yapping

Hacker News - Thu, 03/05/2026 - 11:22pm

Hey HN,

I got tired of the performative culture on LinkedIn. The platform mostly feels like people just yapping at each other all day to farm engagement, so I decided to build a CLI tool to do the yapping for me. It's called Yappy (because we just yap yap yap).

It is an open-source Python TUI that automates your LinkedIn engagement directly from the command line. It logs in, reads your feed, and uses the Gemini API to generate context-aware comments and drop likes based on your prompt parameters. Everything runs completely in the terminal, so you never actually have to look at the feed yourself.
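The "prompt parameters" step might look roughly like this. Purely illustrative: the function and parameter names are made up and may not match Yappy's internals.

```python
# Sketch: turning a feed post plus user-set parameters into an LLM
# prompt for a comment. Names are illustrative, not Yappy's API.
def build_comment_prompt(post_text: str, tone: str = "supportive",
                         max_words: int = 40) -> str:
    return (
        f"Write a {tone} LinkedIn comment of at most {max_words} words "
        f"reacting to this post:\n\n{post_text}"
    )

prompt = build_comment_prompt("We just shipped v2!", tone="enthusiastic")
```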

I know AI-generated comments and web scraping are controversial, and LinkedIn's TOS strictly prohibits this kind of automation. This is built purely as an experimental toy project and a proof-of-concept for integrating LLMs with terminal interfaces. If you actually decide to run it, definitely use a burner account, as LinkedIn will likely restrict you if you run it too aggressively.

I am mostly looking for technical feedback on the Python code architecture and the TUI implementation.

Would love to hear your thoughts. Roast my code or drop a PR if you find the concept funny.

Comments URL: https://news.ycombinator.com/item?id=47270845

Points: 2

# Comments: 0

Categories: Hacker News
